Fewer Students Learning Arithmetic and Algebra

by Jerome Dancis

This summer, I obtained the college remediation data for my state of Maryland. Well, just 2014, the latest year available. So the data are BCC, i.e., from before Common Core-based tests became the state tests in Maryland.

Does anyone know of similar data for other states?

Fewer Students Learning Arithmetic and Algebra

Analysis based on data from the Maryland Higher Education Commission’s (MHEC) Student Outcome and Achievement Report (SOAR).

The data for my state of Maryland (MD) follow. (These data may be typical of many of the 45 states that adapted the NCTM Standards.)

Decline in Percent of Freshmen Entering Colleges in Maryland Who Knew Arithmetic and Real High School Algebra I

                           1998    2005    2006    2014
Whites                      67%     60%     58%     64%
African-Americans           44%     33%     36%     37%
Hispanics                   56%     42%     43%     44%

See my [Univ. of Maryland] Faculty Voice article, “More Remedial Math [at MD Colleges]? [YES]” (scroll down to the bottom of page 1).

Caveat. This data describes only those graduates of Maryland high schools in 1998, 2005, 2006 and 2014, who entered a college in Maryland the same year.

Related Data. From 1998 to 2005, the number of white graduates increased by 11% (from 14,473 to 16,127), but the number who knew arithmetic and high school Algebra I decreased (from 9,703 to 9,619), as determined by college placement tests.
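
For readers who want to check the arithmetic, here is a minimal sketch in Python using only the figures quoted above; it reproduces the 11% enrollment increase and the 67% and 60% readiness rates shown in the table above.

    # A quick check of the figures quoted above (white graduates entering Maryland colleges).
    grads_1998, grads_2005 = 14473, 16127    # total white freshmen
    ready_1998, ready_2005 = 9703, 9619      # those who placed out of remedial math

    increase = (grads_2005 - grads_1998) / grads_1998
    print(f"Enrollment increase, 1998 to 2005: {increase:.0%}")            # ~11%
    print(f"Ready for college math, 1998: {ready_1998 / grads_1998:.0%}")  # ~67%, matches the table
    print(f"Ready for college math, 2005: {ready_2005 / grads_2005:.0%}")  # ~60%, matches the table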

Similarly, from 1998 to 2005, the number of African-American graduates who were minimally ready for college math went down, even though college enrollment of African-American females rose 21% and of males 31%.

One likely cause of the downturn: High school Algebra I used to be the Algebra course colleges expected. Under the specter of the MD School Assessments (MSAs) and High School Assessments (HSAs), school administrators have bent instructional programs out of shape in order to teach to the state tests. The MSAs in math and the MD Voluntary Math Curriculum marginalize Arithmetic, leaving too many students without enough time to learn it. Arithmetic lessons were largely Arithmetic with a calculator. The MD HSA on Algebra was Algebra with a graphing calculator; it avoided the arithmetic and arithmetic-based Algebra that students would need in college, such as knowing that 3x + 2x = 5x and that 9×8 = 72. I nicknamed it the MD HSA on “Pretend Algebra.”

Fewer Students Learning Arithmetic and Algebra was originally published on Nonpartisan Education Blog

Cognitive Science and the Common Core

New in the Nonpartisan Education Review:

Cognitive Science and the Common Core Mathematics Standards

by Eric A. Nelson

Abstract

Between 1995 and 2010, most U.S. states adopted K–12 math standards which discouraged memorization of math facts and procedures.  Since 2010, most states have revised standards to align with the K–12 Common Core Mathematics Standards (CCMS).  The CCMS do not ask students to memorize facts and procedures for some key topics and delay work with memorized fundamentals in others.

Recent research in cognitive science has found that the brain has only minimal ability to reason with knowledge that has not previously been well-memorized.  This science predicts that students taught under math standards that discouraged initial memorization for math topics will have significant difficulty solving numeric problems in mathematics, science, and engineering.  As one test of this prediction, in a recent OECD assessment of numeracy skills among 22 developed-world nations, U.S. 16–24 year olds ranked dead last.  Discussion will include steps that can be taken to align K–12 state standards with practices supported by cognitive research.

Cognitive Science and the Common Core was originally published on Nonpartisan Education Blog

Fordham Institute’s pretend research

The Thomas B. Fordham Institute has released a report, Evaluating the Content and Quality of Next Generation Assessments,[i] ostensibly an evaluative comparison of four testing programs, the Common Core-derived SBAC and PARCC, ACT’s Aspire, and the Commonwealth of Massachusetts’ MCAS.[ii] Of course, anyone familiar with Fordham’s past work knew beforehand which tests would win.

This latest Fordham Institute Common Core apologia is not so much research as a caricature of it.

  1. Instead of referencing a wide range of relevant research, Fordham references only friends from inside their echo chamber and others paid by the Common Core’s wealthy benefactors. But, they imply that they have covered a relevant and adequately wide range of sources.
  2. Instead of evaluating tests according to the industry standard Standards for Educational and Psychological Testing, or any of dozens of other freely-available and well-vetted test evaluation standards, guidelines, or protocols used around the world by testing experts, they employ “a brand new methodology” specifically developed for Common Core, for the owners of the Common Core, and paid for by Common Core’s funders.
  3. Instead of asserting as fact only that which has been rigorously evaluated and accepted as fact by skeptics, the authors continue the practice of Common Core salespeople of attributing benefits to their tests for which no evidence exists.
  4. Instead of addressing any of the many sincere, profound critiques of their work, as confident and responsible researchers would do, the Fordham authors tell their critics to go away—“If you don’t care for the standards…you should probably ignore this study” (p. 4).
  5. Instead of writing in neutral language as real researchers do, the authors adopt the practice of coloring their language as so many Common Core salespeople do, attaching nice-sounding adjectives and adverbs to what serves their interest, and bad-sounding words to what does not.

1.  Common Core’s primary private financier, the Bill & Melinda Gates Foundation, pays the Fordham Institute handsomely to promote the Core and its associated testing programs.[iii] A cursory search through the Gates Foundation web site reveals $3,562,116 granted to Fordham since 2009 expressly for Common Core promotion or “general operating support.”[iv] Gates awarded an additional $653,534 between 2006 and 2009 for forming advocacy networks, which have since been used to push Common Core. All of the remaining Gates-to-Fordham grants listed supported work promoting charter schools in Ohio ($2,596,812), reputedly the nation’s worst.[v]

The other research entities involved in the latest Fordham study either directly or indirectly derive sustenance at the Gates Foundation dinner table:

  • the Human Resources Research Organization (HumRRO),[vi]
  • the Council of Chief State School Officers (CCSSO), co-holder of the Common Core copyright and author of the test evaluation “Criteria.”[vii]
  • the Stanford Center for Opportunity Policy in Education (SCOPE), headed by Linda Darling-Hammond, the chief organizer of one of the federally-subsidized Common Core-aligned testing programs, the Smarter-Balanced Assessment Consortium (SBAC),[viii] and
  • Student Achievement Partners, the organization that claims to have inspired the Common Core standards[ix]

The Common Core’s grandees have always only hired their own well-subsidized grantees for evaluations of their products. The Buros Center for Testing at the University of Nebraska has conducted test reviews for decades, publishing many of them in its annual Mental Measurements Yearbook for the entire world to see, and critique. Indeed, Buros exists to conduct test reviews, and retains hundreds of the world’s brightest and most independent psychometricians on its reviewer roster. Why did Common Core’s funders not hire genuine professionals from Buros to evaluate PARCC and SBAC? The non-psychometricians at the Fordham Institute would seem a vastly inferior substitute, …that is, had the purpose genuinely been an objective evaluation.

2.  A second reason Fordham’s intentions are suspect rests with their choice of evaluation criteria. The “bible” of North American testing experts is the Standards for Educational and Psychological Testing, jointly produced by the American Psychological Association, National Council on Measurement in Education, and the American Educational Research Association. Fordham did not use it.[x]

Had Fordham compared the tests using the Standards for Educational and Psychological Testing (or any of a number of other widely-respected test evaluation standards, guidelines, or protocols[xi]), SBAC and PARCC would have flunked. They have yet to accumulate some of the most basic empirical evidence of reliability, validity, or fairness, and past experience with similar types of assessments suggests they will fail on all three counts.[xii]

Instead, Fordham chose to reference an alternate set of evaluation criteria concocted by the organization that co-owns the Common Core standards and co-sponsored their development (Council of Chief State School Officers, or CCSSO), drawing on the work of Linda Darling-Hammond’s SCOPE, the Center for Research on Educational Standards and Student Testing (CRESST), and a handful of others.[xiii],[xiv] Thus, Fordham compares SBAC and PARCC to other tests according to specifications that were designed for SBAC and PARCC.[xv]

The authors write, “The quality and credibility of an evaluation of this type rests largely on the expertise and judgment of the individuals serving on the review panels” (p. 12). A scan of the names of everyone in decision-making roles, however, reveals that Fordham relied on those they have hired before and whose decisions they could safely predict. In any case, given the evaluation criteria employed, the outcome was foreordained regardless of whom they hired to review, not unlike a rigged election in a dictatorship where voters’ choices are restricted to already-chosen candidates.

Still, PARCC and SBAC might have flunked even if Fordham had compared tests using all 24+ of CCSSO’s “Criteria.” But Fordham chose to compare on only 14 of the criteria.[xvi] And those just happened to be criteria mostly favoring PARCC and SBAC.

Without exception the Fordham study avoided all the evaluation criteria in the categories:

“Meet overall assessment goals and ensure technical quality”,

“Yield valuable reports on student progress and performance”,

“Adhere to best practices in test administration”, and

“State specific criteria”[xvii]

What types of test characteristics can be found in these neglected categories? Test security, providing timely data to inform instruction, validity, reliability, score comparability across years, transparency of test design, requiring involvement of each state’s K-12 educators and institutions of higher education, and more. Other characteristics often claimed for PARCC and SBAC, without evidence, cannot even be found in the CCSSO criteria (e.g., internationally benchmarked, backward mapping from higher education standards, fairness).

The report does not evaluate the “quality” of tests, as its title suggests; at best it is an alignment study. And, naturally, one would expect the Common Core consortium tests to be more aligned to the Common Core than other tests. The only evaluative criteria used from the CCSSO’s Criteria are in the two categories “Align to Standards—English Language Arts” and “Align to Standards—Mathematics” and, even then, only for grades 5 and 8.

Nonetheless, the authors claim, “The methodology used in this study is highly comprehensive” (p. 74).

The authors of the Pioneer Institute’s report How PARCC’s false rigor stunts the academic growth of all students,[xviii] recommended strongly against the official adoption of PARCC after an analysis of its test items in reading and writing. They also did not recommend continuing with the current MCAS, which is also based on Common Core’s mediocre standards, chiefly because the quality of the grade 10 MCAS tests in math and ELA has deteriorated in the past seven or so years for reasons that are not yet clear. Rather, they recommend that Massachusetts return to its effective pre-Common Core standards and tests and assign the development and monitoring of the state’s mandated tests to a more responsible agency.

Perhaps the primary conceit of Common Core proponents is that the familiar multiple-choice/short answer/essay standardized tests ignore some, and arguably the better, parts of learning (the deeper, higher, more rigorous, whatever)[xix]. Ironically, it is they—opponents of traditional testing content and formats—who propose that standardized tests measure everything. By contrast, most traditional standardized test advocates do not suggest that standardized tests can or should measure any and all aspects of learning.

Consider this standard from the Linda Darling-Hammond, et al. source document for the CCSSO criteria:

”Research: Conduct sustained research projects to answer a question (including a self-generated question) or solve a problem, narrow or broaden the inquiry when appropriate, and demonstrate understanding of the subject under investigation. Gather relevant information from multiple authoritative print and digital sources, use advanced searches effectively, and assess the strengths and limitations of each source in terms of the specific task, purpose, and audience.”[xx]

Who would oppose this as a learning objective? But, does it make sense as a standardized test component? How does one objectively and fairly measure “sustained research” in the one- or two-minute span of a standardized test question? In PARCC tests, this is simulated by offering students snippets of documentary source material and grading them as having analyzed the problem well if they cite two of those already-made-available sources.

But, that is not how research works. It is hardly the type of deliberation that comes to most people’s mind when they think about “sustained research”. Advocates for traditional standardized testing would argue that standardized tests should be used for what standardized tests do well; “sustained research” should be measured more authentically.

The authors of the aforementioned Pioneer Institute report recommend, as their 7th policy recommendation for Massachusetts:

“Establish a junior/senior-year interdisciplinary research paper requirement as part of the state’s graduation requirements—to be assessed at the local level following state guidelines—to prepare all students for authentic college writing.”[xxi]

PARCC, SBAC, and the Fordham Institute propose that they can validly, reliably, and fairly measure the outcome of what is normally a weeks- or months-long project in a minute or two. It is attempting to measure that which cannot be well measured on standardized tests that makes PARCC and SBAC tests “deeper” than others. In practice, the alleged deeper parts are the most convoluted and superficial.

Appendix A of the source document for the CCSSO criteria provides three international examples of “high-quality assessments” in Singapore, Australia, and England.[xxiii] None are standardized test components. Rather, all are projects developed over extended periods of time—weeks or months—as part of regular course requirements.

Common Core proponents scoured the globe to locate “international benchmark” examples of the type of convoluted (i.e., “higher”, “deeper”) test questions included in PARCC and SBAC tests. They found none.

3.  The authors continue the Common Core sales tendency of attributing benefits to their tests for which no evidence exists. For example, the Fordham report claims that SBAC and PARCC will:

“make traditional ‘test prep’ ineffective” (p. 8)

“allow students of all abilities, including both at-risk and high-achieving youngsters, to demonstrate what they know and can do” (p. 8)

produce “test scores that more accurately predict students’ readiness for entry-level coursework or training” (p. 11)

“reliably measure the essential skills and knowledge needed … to achieve college and career readiness by the end of high school” (p. 11)

“…accurately measure student progress toward college and career readiness; and provide valid data to inform teaching and learning.” (p. 3)

eliminate the problem of “students … forced to waste time and money on remedial coursework.” (p. 73)

help “educators [who] need and deserve good tests that honor their hard work and give useful feedback, which enables them to improve their craft and boost their students’ success.” (p. 73)

The Fordham Institute has not a shred of evidence to support any of these grandiose claims. They have more in common with carnival fortune telling than with empirical research. Granted, most of the statements refer to future outcomes, which cannot be known with certainty. But that just affirms how irresponsible it is to make such claims absent any evidence.

Furthermore, in most cases, past experience would suggest just the opposite of what Fordham asserts. Test prep is more, not less, likely to be effective with SBAC and PARCC tests because the test item formats are complex (or, convoluted), introducing more “construct irrelevant variance”—that is, students will get lower scores for not managing to figure out formats or computer operations issues, even if they know the subject matter of the test. Disadvantaged and at-risk students tend to be the most disadvantaged by complex formatting and new technology.

As for Common Core, SBAC, and PARCC eliminating the “problem of” college remedial courses, such will be done by simply cancelling remedial courses, whether or not they might be needed, and lowering college entry-course standards to the level of current remedial courses.

4.  When not dismissing or denigrating SBAC and PARCC critiques, the Fordham report evades them, even suggesting that critics should not read it: “If you don’t care for the standards…you should probably ignore this study” (p. 4).

Yet, cynically, in the very first paragraph the authors invoke the name of Sandy Stotsky, one of their most prominent adversaries, and a scholar of curriculum and instruction so widely respected she could easily have gotten wealthy had she chosen to succumb to the financial temptation of the Common Core’s profligacy as so many others have. Stotsky authored the Fordham Institute’s “very first study” in 1997, apparently. Presumably, the authors of this report drop her name to suggest that they are broad-minded. (It might also suggest that they are now willing to publish anything for a price.)

Tellingly, one will find Stotsky’s name nowhere after the first paragraph. None of her (or anyone else’s) many devastating critiques of the Common Core tests is either mentioned or referenced. Genuine research does not hide or dismiss its critiques; it addresses them.

Ironically, the authors write, “A discussion of [test] qualities, and the types of trade-offs involved in obtaining them, are precisely the kinds of conversations that merit honest debate.” Indeed.

5.  Instead of writing in neutral language as real researchers do, the authors adopt the habit of coloring their language as Common Core salespeople do. They attach nice-sounding adjectives and adverbs to what they like, and bad-sounding words to what they don’t.

For PARCC and SBAC one reads:

“strong content, quality, and rigor”

“stronger tests, which encourage better, broader, richer instruction”

“tests that focus on the essential skills and give clear signals”

“major improvements over the previous generation of state tests”

“complex skills they are assessing.”

“high-quality assessment”

“high-quality assessments”

“high-quality tests”

“high-quality test items”

“high quality and provide meaningful information”

“carefully-crafted tests”

“these tests are tougher”

“more rigorous tests that challenge students more than they have been challenged in the past”

For other tests one reads:

“low-quality assessments poorly aligned with the standards”

“will undermine the content messages of the standards”

“a best-in-class state assessment, the 2014 MCAS, does not measure many of the important competencies that are part of today’s college and career readiness standards”

“have generally focused on low-level skills”

“have given students and parents false signals about the readiness of their children for postsecondary education and the workforce”

Appraising its own work, Fordham writes:

“groundbreaking evaluation”

“meticulously assembled panels”

“highly qualified yet impartial reviewers”

Considering those who have adopted SBAC or PARCC, Fordham writes:

“thankfully, states have taken courageous steps”

“states’ adoption of college and career readiness standards has been a bold step in the right direction.”

“adopting and sticking with high-quality assessments requires courage.”

 

A few other points bear mentioning. The Fordham Institute was granted access to operational SBAC and PARCC test items. Over the course of a few months in 2015, the Pioneer Institute, a strong critic of Common Core, PARCC, and SBAC, appealed for similar access to PARCC items. The convoluted run-around responses from PARCC officials excelled at bureaucratic stonewalling. Despite numerous requests, Pioneer never received access.

The Fordham report claims that PARCC and SBAC are governed by “member states”, whereas ACT Aspire is owned by a private organization. Actually, the Common Core Standards are owned by two private, unelected organizations, the Council of Chief State School Officers and the National Governors’ Association, and only each state’s chief school officer sits on PARCC and SBAC panels. Individual states actually have far more say-so if they adopt ACT Aspire (or their own test) than if they adopt PARCC or SBAC. A state adopts ACT Aspire under the terms of a negotiated, time-limited contract. By contrast, a state or, rather, its chief state school officer, has but one vote among many around the tables at PARCC and SBAC. With ACT Aspire, a state controls the terms of the relationship. With SBAC and PARCC, it does not.[xxiv]

Just so you know, on page 71, Fordham recommends that states eliminate any tests that are not aligned to the Common Core Standards, in the interest of efficiency, supposedly.

In closing, it is only fair to mention the good news in the Fordham report. It promises on page 8, “We at Fordham don’t plan to stay in the test-evaluation business”.

 

[i] Nancy Doorey & Morgan Polikoff. (2016, February). Evaluating the content and quality of next generation assessments. With a Foreword by Amber M. Northern & Michael J. Petrilli. Washington, DC: Thomas B. Fordham Institute. http://edexcellence.net/publications/evaluating-the-content-and-quality-of-next-generation-assessments

[ii] PARCC is the Partnership for Assessment of Readiness for College and Careers; SBAC is the Smarter-Balanced Assessment Consortium; MCAS is the Massachusetts Comprehensive Assessment System; ACT Aspire is not an acronym (though, originally ACT stood for American College Test).

[iii] The reason for inventing a Fordham Institute when a Fordham Foundation already existed may have had something to do with taxes, but it also allows Chester Finn, Jr. and Michael Petrilli to each pay themselves two six-figure salaries instead of just one.

[iv] http://www.gatesfoundation.org/search#q/k=Fordham

[v] See, for example, http://www.ohio.com/news/local/charter-schools-misspend-millions-of-ohio-tax-dollars-as-efforts-to-police-them-are-privatized-1.596318 ; http://www.cleveland.com/metro/index.ssf/2015/03/ohios_charter_schools_ridicule.html ; http://www.dispatch.com/content/stories/local/2014/12/18/kasich-to-revamp-ohio-laws-on-charter-schools.html ; https://www.washingtonpost.com/news/answer-sheet/wp/2015/06/12/troubled-ohio-charter-schools-have-become-a-joke-literally/

[vi] HumRRO has produced many favorable reports for Common Core-related entities, including alignment studies in Kentucky, New York State, California, and Connecticut.

[vii] CCSSO has received 23 grants from the Bill & Melinda Gates Foundation from “2009 and earlier” to 2016 collectively exceeding $100 million. http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=CCSSO

[viii] http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=%22Stanford%20Center%20for%20Opportunity%20Policy%20in%20Education%22

[ix] Student Achievement Partners has received four grants from the Bill & Melinda Gates Foundation from 2012 to 2015 exceeding $13 million. http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=%22Student%20Achievement%20Partners%22

[x] The authors write that the standards they use are “based on” the real Standards. But, that is like saying that Cheez Whiz is based on cheese. Some real cheese might be mixed in there, but it’s not the product’s most distinguishing ingredient.

[xi] (e.g., the International Test Commission’s (ITC) Guidelines for Test Use; the ITC Guidelines on Quality Control in Scoring, Test Analysis, and Reporting of Test Scores; the ITC Guidelines on the Security of Tests, Examinations, and Other Assessments; the ITC’s International Guidelines on Computer-Based and Internet-Delivered Testing; the European Federation of Psychologists’ Association (EFPA) Test Review Model; the Standards of the Joint Committee on Testing Practices)

[xii] Despite all the adjectives and adverbs implying newness to PARCC and SBAC as “Next Generation Assessment”, it has all been tried before and failed miserably. Indeed, many of the same persons involved in past fiascos are pushing the current one. The allegedly “higher-order”, more “authentic”, performance-based tests administered in Maryland (MSPAP), California (CLAS), and Kentucky (KIRIS) in the 1990s failed because of unreliable scores; volatile test score trends; secrecy of items and forms; an absence of individual scores in some cases; individuals being judged on group work in some cases; large expenditures of time; inconsistent (and some improper) test preparation procedures from school to school; inconsistent grading on open-ended response test items; long delays between administration and release of scores; little feedback for students; and no substantial evidence after several years that education had improved. As one should expect, instruction had changed as test proponents desired, but without empirical gains or perceived improvement in student achievement. Parents, politicians, and measurement professionals alike overwhelmingly rejected these dysfunctional tests.

See, for example, For California: Michael W. Kirst & Christopher Mazzeo, (1997, December). The Rise, Fall, and Rise of State Assessment in California: 1993-96, Phi Delta Kappan, 78(4) Committee on Education and the Workforce, U.S. House of Representatives, One Hundred Fifth Congress, Second Session, (1998, January 21). National Testing: Hearing, Granada Hills, CA. Serial No. 105-74; Representative Steven Baldwin, (1997, October). Comparing assessments and tests. Education Reporter, 141. See also Klein, David. (2003). “A Brief History Of American K-12 Mathematics Education In the 20th Century”, In James M. Royer, (Ed.), Mathematical Cognition, (pp. 175–226). Charlotte, NC: Information Age Publishing. For Kentucky: ACT. (1993). “A study of core course-taking patterns. ACT-tested graduates of 1991-1993 and an investigation of the relationship between Kentucky’s performance-based assessment results and ACT-tested Kentucky graduates of 1992”. Iowa City, IA: Author; Richard Innes. (2003). Education research from a parent’s point of view. Louisville, KY: Author. http://www.eddatafrominnes.com/index.html ; KERA Update. (1999, January). Misinformed, misled, flawed: The legacy of KIRIS, Kentucky’s first experiment. For Maryland: P. H. Hamp, & C. B. Summers. (2002, Fall). “Education.” In P. H. Hamp & C. B. Summers (Eds.), A guide to the issues 2002–2003. Maryland Public Policy Institute, Rockville, MD. http://www.mdpolicy.org/docLib/20051030Education.pdf ; Montgomery County Public Schools. (2002, Feb. 11). “Joint Teachers/Principals Letter Questions MSPAP”, Public Announcement, Rockville, MD. http://www.montgomeryschoolsmd.org/press/index.aspx?pagetype=showrelease&id=644 ; HumRRO. (1998). Linking teacher practice with statewide assessment of education. Alexandria, VA: Author. http://www.humrro.org/corpsite/page/linking-teacher-practice-statewide-assessment-education

[xiii]http://www.ccsso.org/Documents/2014/CCSSO Criteria for High Quality Assessments 03242014.pdf

[xiv] A rationale is offered for why they had to develop a brand new set of test evaluation criteria (p. 13). Fordham claims that new criteria were needed, which weighted some criteria more than others. But, weights could easily be applied to any criteria, including the tried-and-true, preexisting ones.

[xv] For an extended critique of the CCSSO Criteria employed in the Fordham report, see “Appendix A. Critique of Criteria for Evaluating Common Core-Aligned Assessments” in Mark McQuillan, Richard P. Phelps, & Sandra Stotsky. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute, pp. 62-68. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

[xvi] Doorey & Polikoff, p. 14.

[xvii] MCAS bests PARCC and SBAC according to several criteria specific to the Commonwealth, such as the requirements under the current Massachusetts Education Reform Act (MERA) that it serve as a grade 10 high school exit exam, test students in several subject fields (not just ELA and math), and provide specific and timely instructional feedback.

[xviii] McQuillan, M., Phelps, R.P., & Stotsky, S. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

[xix] It is perhaps the most enlightening paradox that, among Common Core proponents’ profuse expulsion of superlative adjectives and adverbs advertising their “innovative”, “next generation” research results, the words “deeper” and “higher” mean the same thing.

[xx] The document asserts, “The Common Core State Standards identify a number of areas of knowledge and skills that are clearly so critical for college and career readiness that they should be targeted for inclusion in new assessment systems.” Linda Darling-Hammond, Joan Herman, James Pellegrino, Jamal Abedi, J. Lawrence Aber, Eva Baker, Randy Bennett, Edmund Gordon, Edward Haertel, Kenji Hakuta, Andrew Ho, Robert Lee Linn, P. David Pearson, James Popham, Lauren Resnick, Alan H. Schoenfeld, Richard Shavelson, Lorrie A. Shepard, Lee Shulman, and Claude M. Steele. (2013). Criteria for high-quality assessment. Stanford, CA: Stanford Center for Opportunity Policy in Education; Center for Research on Student Standards and Testing, University of California at Los Angeles; and Learning Sciences Research Institute, University of Illinois at Chicago, p. 7. https://edpolicy.stanford.edu/publications/pubs/847

[xxi] McQuillan, Phelps, & Stotsky, p. 46.

[xxiii] Linda Darling-Hammond, et al., pp. 16-18. https://edpolicy.stanford.edu/publications/pubs/847

[xxiv] For an in-depth discussion of these governance issues, see Peter Wood’s excellent Introduction to Drilling Through the Core, http://www.amazon.com/gp/product/0985208694

Fordham Institute’s pretend research was originally published on Nonpartisan Education Blog

Overtesting or Overcounting?

Commenting on the Center for American Progress’s (CAP’s) report, Testing Overload in America’s Schools,

https://www.americanprogress.org/issues/education/report/2014/10/16/99073/testing-overload-in-americas-schools/

…and the Education Writers’ Association coverage of it,

http://www.ewa.org/blog-ed-beat/how-much-time-do-students-spend-taking-tests

… Some testing opponents have always said there is overtesting, no matter how much testing there actually is (just as they have always said there is a “growing backlash” against testing). Given limited time, I will examine only one of the claims made in the CAP report:

“… in the Jefferson County school district in Kentucky, which includes Louisville, students in grades 6-8 were tested approximately 20 times throughout the year. Sixteen of these tests were district level assessments.” (p.19)

A check of the Jefferson County school district web site –

http://www.jefferson.k12.ky.us/Departments/testingunit/FORMS/1415JCPSSYSWIDEASCAL.pdf

reveals the following: there are no district-developed standardized tests – NONE. All systemwide tests are either state-developed or national exams.

Moreover, regular students in grades 6 and 7 take only one test per year – ONE – the K-Prep, though it is a full-battery test (i.e., five core subjects) with only one subject tested per day. (No, each subtest does not take up a whole day; more likely each subtest takes 1-1.5 hours, but slower students are given all morning to finish while the other students study something else in a different room and the afternoon is used for instruction.) So, even if you (I would say, misleadingly) count each subtest as a whole test, the students in grades 6 and 7 take only 5 tests during the year, none of them district tests.

So, is the Center for American Progress lying to us? It depends on how you define lying. There is other standardized testing in grades 6 and 7. There is, for example, the “Alternate K-Prep” for those with disabilities, but students without disabilities don’t take it, and students with disabilities don’t take the regular K-Prep.

Also there is the “Make-up K-Prep” which is administered to the regular students who were sick during the regular K-Prep administration times. But, students who took the K-Prep during the regular administration do not take the Make-up K-Prep.

There are also the ACCESS for ELLs and Alternate ACCESS for ELLs tests administered in late January and February, but only to English Language Learners. ACCESS is used to help guide the language training and course placement of ELL (or ESL) students. Only a Scrooge would begrudge the district these tests or count them as “overtesting.”

And, that’s it. To get to 20 tests a year, the CAP had to assume that each and every student took each and every subtest. They even had to assume that the students sick during the regular K-Prep administration were not sick, and that all students who took the regular K-Prep also took the Make-up K-Prep.
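
To make the counting argument concrete, here is a purely illustrative Python sketch. The breakdown is hypothetical (my own, not CAP’s actual worksheet) and uses only the tests named above; it shows how the counting convention, rather than what a regular, English-proficient sixth grader actually sits for, drives the total.

    # Hypothetical tally for one regular, English-proficient 6th grader, using only the tests named above.
    KPREP_SUBTESTS = 5        # one full-battery K-Prep covers five core subjects

    # Convention 1: count what this student actually sits for.
    tests_per_student = 1     # the K-Prep battery, taken once

    # Convention 2: count every subtest, plus forms this student never takes.
    tests_counted = (
        KPREP_SUBTESTS        # each K-Prep subject counted as a separate test
        + KPREP_SUBTESTS      # Alternate K-Prep (students with disabilities only)
        + KPREP_SUBTESTS      # Make-up K-Prep (only for students absent the first time)
        + 2                   # ACCESS and Alternate ACCESS (English Language Learners only)
    )

    print(tests_per_student)  # 1
    print(tests_counted)      # 17 -- counted this way, "approximately 20" is within reach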

Counting tests in US education has been this way for at least a quarter-century. Those prone to do so goose the numbers any way they plausibly can. A test is given in grade 5 on Tuesday? Count all students in the school as being tested. A DIBELS test takes all of one minute to administer? Count a full class period as lost. A 3-hour ACT has five sub-sections? That counts as five tests. Only a small percentage of schools in the district are sampled to take the National Assessment of Educational Progress in one or two grades? Count all students in all grades in the district as being tested, and count all the subjects tested individually.

Critics have gotten away with this fibbing for so long it has become routine–the standard way to count the amount of testing. And, reporters tend to pass it along as fact.

Richard P. Phelps

Overtesting or Overcounting? was originally published on Nonpartisan Education Blog

Kamenetz, A. (2015). The Test: Why our schools are obsessed with standardized testing—but you don’t have to be. New York: Public Affairs. Book Review, by Richard P. Phelps

Perhaps it is because I avoid most tabloid journalism that I found journalist Anya Kamenetz’s loose cannon Introduction to The Test: Why our schools are obsessed with standardized testing—but you don’t have to be so jarring. In the space of seven pages, she employs the pejoratives “test obsession”, “test score obsession”, “testing obsession”, “insidious … test creep”, “testing mania”, “endless measurement”, “testing arms race”, “high-stakes madness”, “obsession with metrics”, and “test-obsessed culture”.

Those un-measured words sit alongside assertions that educational, standardized, or high-stakes testing is responsible for numerous harms ranging from stomachaches, stunted spirits, family stress, “undermined” schools, demoralized teachers, and paralyzed public debate to the Great Recession (pp. 1, 6, 7), which was in fact sparked by problems with mortgage-backed financial securities (the supposed connection being that parents choose home locations in part based on schools’ average test scores). Oh, and tests are “gutting our country’s future competitiveness,” too (p. 1).

Kamenetz made almost no effort to search for counter evidence[1]: “there’s lots of evidence that these tests are doing harm, and very little in their favor” (p. 13). Among her several sources for information on the relevant research literature are arguably the country’s most prolific proponents of the notion that little to no research exists showing educational benefits from testing.[2] Ergo, why bother to look for it?

Had a journalist covered the legendary feud between the Hatfield and McCoy families, and talked only to the Hatfields, one might expect a surplus of reportage favoring the Hatfields and disfavoring the McCoys, and a deficit of reportage favoring the McCoys and disfavoring the Hatfields.

Looking at tests from any angle, Kamenetz sees only evil. Tests are bad because tests were used to enforce Jim Crow discrimination (p. 63). Tests are bad because some of the first scientists to use intelligence tests were racists (pp. 40-43).

Tests are bad because they employ the statistical tools of latent trait theory and factor analysis—as tens of thousands of social scientists worldwide currently do—but the “eminent paleontologist” Stephen J. Gould doesn’t like them (pp. 46-48). (He argued that if you cannot measure something directly, it doesn’t really exist.) And, by the way, did you know that some of the early 20th-century scientists of intelligence testing were racists? (pp. 48-57)

Tests are bad because of Campbell’s Law: “when a measure becomes a target, it ceases to be a good measure” (p. 5). Such a criticism, if valid, could be used to condemn any measure used evaluatively in any of society’s realms. Forget health and medical studies, sports statistics, Department of Agriculture food monitoring protocols, and ratings by Consumer Reports, Angie’s List, or the Food and Drug Administration. None are “good measures” because they are all targets.

Tests are bad because they are “controlled by a handful of companies” (pp. 5, 81), “The testing company determines the quality of teachers’ performance.” (p. 20), and “tests shift control and authority into the hands of the unregulated testing industry” (p. 75). Such criticisms, if valid, could be used to justify nationalizing all businesses in industries with high scale economies (e.g., there are only four big national wireless telephone companies, so perhaps the federal government should take over), and outlaw all government contracting. Most of our country’s roads and bridges, for example, are built by private construction firms under contract to local, state, and national government agencies to their specifications, just like most standardized tests; but who believes that those firms control our roads?

Kamenetz swallows education anti-testing dogma whole. She claims that multiple-choice items can only test recall and basic skills (p. 35), that students learn nothing while they are taking tests (p. 15), and that US students are tested more than any others (pp. 15-17, 75)—and they are if you count the way her information sources do—counting at minimum an entire class period for each test administration, even a one-minute DIBELS test; counting all students in all grades of a school as taking a test whenever any students in any grade are taking a test; counting all subtests independently in the US (e.g., each ACT counts as five because it has five subtests) but only the whole tests for other countries; etc.

Standardized testing absorbs way too much money and time, according to Kamenetz. Later in the book, however, she recommends an alternative education universe of fuzzy assessments that, if enacted, would absorb far more time and money.

What are her solutions to the insidious obsessive mania of testing? There is some Rousseau-an fantasizing—all school should be like her daughter’s happy pre-school where each student learned at his or her own pace (pp. 3-4) and the school’s job was “customizing learning to each student” (p. 8).

Some of the book’s latter half is devoted to “innovative” (of course) solutions that are not quite as innovative as she seems to believe. She is National Public Radio’s “lead digital education reporter” so some interesting new and recent technologies suffuse the recommendations. But, even jazzing up the context, format, and delivery mechanisms with the latest whiz-bang gizmos will not eliminate the problems inherent in her old-new solutions: performance testing, simulations, demonstrations, portfolios, and the like. Like so many Common Core Standards boosters advocating the same “innovations”, she seems unaware that they have been tried in the past, with disastrous results.[3]

As I do not know Ms. Kamenetz personally, I must assume that she is sincere in her beliefs and made her own decisions about what to write. But, if she had naively allowed herself to be wholly misled by those with a vested interest in education establishment doctrine, the end result would have been no different.

The book is a lazily slapped-together rant, unworthy of a journalist. Ironically, however, I agree with Kamenetz on many issues. Like her, I do not much like the assessment components of the old No Child Left Behind Act or the new Common Core Standards. But, my solution would be to repeal both programs, not eliminate standardized testing. Like her, I oppose the US practice of relying on a single proficiency standard for all students (pp. 5, 36). But, my solution would be to employ multiple targets, as most other countries do. She would dump the tests.

Like Kamenetz, I believe it unproductive to devote more than a smidgen of time (at most half a day) to test preparation with test forms and item formats that are separate from subject matter learning. And, like her (p. 194), I am convinced that it does more harm than good. But, she blames the tests and the testing companies for the abomination; in fact, the testing companies prominently and frequently discourage the practice. It is the same testing opponents she has chosen to trust who claim that it works. It serves their argument to claim that non-subject-matter-related test preparation works because, if it were true, it would demonstrate that tests can be gamed with tricks and are invalid measurement instruments.

Like her, I oppose firing teachers based on student test scores, as current value-added measurement (VAM) systems do while there are no consequences for the students. I believe it wrong because too few data points are used and because student effort in such conditions is not reliable, varying by age, gender, socio-economic level, and more. But, I would eliminate the VAM program, or drastically revise it; she would eliminate the tests.

Like Kamenetz, I believe that educators’ cheating on tests is unacceptable, far more common than is publicly known, and should be stopped. I say, stop the cheating. She says, dump the tests.

It defies common sense to have teachers administering high-stakes tests in their own classrooms. Rotating test administration assignments so that teachers do not proctor their own students is easy. Rotating assignments further so that every testing room is proctored by at least two adults is easy, too. So, why aren’t these and other astonishingly easy fixes to test security problems implemented? Note that the education professionals responsible for managing test administrations are often the same who complain that testing is impossibly unfair.

The sensible solution is to take test administration management out of the hands of those who may welcome test administration fiascos, and hire independent professionals with no conflict of interest. But, like many education insiders, Kamenetz would ban the testing; thereby rewarding those who have mismanaged test administrations, sometimes deliberately, with a vacation from reliable external evaluation.

If she were correct on all these issues—that the testing is the problem in each case—shouldn’t we also eliminate examinations for doctors, lawyers, nurses, and pharmacists (all of which rely overwhelmingly on the multiple-choice format, by the way)?

Our country has a problem. More than in most other countries, our public education system is independent, self-contained, and self-renewing. Education professionals staffing school districts make the hiring, purchasing, and school catchment-area boundary-line decisions. School district boundaries often differ from those of other governmental jurisdictions, confusing the electorate. In many jurisdictions, school officials set the dates for votes on bond issues or school board elections, and can do so to their advantage. Those school officials are trained, and socialized, in graduate schools of education.

A half century ago, most faculty in graduate schools of education may have received their own professional training in core disciplines, such as Psychology, Sociology, or Business Management. Today, most education school faculty are themselves education school graduates, socialized in the prevailing culture. The dominant expertise in schools of education can maintain its dominance by hiring faculty who agree with it and denying tenure to those who stray. The dominant expertise in education journals can control education knowledge by accepting article submissions with agreeable results and rejecting those without.

Even most testing and measurement PhD training programs now reside in education schools, inside the same cultural cocoon.

Standardized testing is one of the few remaining independent tools US society has for holding education professionals accountable to serve the public, and not their own, interests. Without valid, reliable, objective external measurement, education professionals can do what they please inside our schools, with our children and our money. When educators are the only arbiters of the quality of their own work, they tend to rate it consistently well.

A substantial portion of The Test’s girth is filled with complaints that tests do not measure most of what students are supposed to or should learn: “It’s math and reading skills, history and science facts that kids are tested and graded on. Emotional, social, moral, spiritual, creative, and physical development all become marginal…” (p. 4). She quotes Daniel Koretz: “These tests can measure only a subset of the goals of education” (p. 14). Several other testing critics are cited making similar claims.

Yet, standards-based tests are developed in a process that takes years, and involves scores of legislators, parents, teachers, and administrators on a variety of decision-making committees. The citizens of a jurisdiction and their representatives choose the content of standards-based tests. They could choose content that Kamenetz and the several other critics she cites prefer, but they don’t.

If the critics are unhappy with test content, they should take their case to the proper authorities, voice their complaints at tedious standards commission hearings, and contribute their time to the rather monotonous work of test framework review committees. I sense that none of that patient effort interests them; instead, they would prefer that all decision-making power be granted to them, ex cathedra, to do as they think best for us.

Moreover, I find some of their assertions about what should be studied and tested rather scary. Our public schools should teach our children emotions, morals, and spirituality?

Likely that prospect would scare most parents, too. But, many parents’ first reaction to a proposal that our schools be allowed to teach their children everything might instead be something like: first show us that you can teach our children to read, write, and compute, then we can discuss further responsibilities.

So long as education insiders insist that we must hand over our money and children and leave them alone to determine—and evaluate—what they do with both, calls for “imploding” the public education system will only grow louder, as they should.

It is bad enough that so many education professors write propaganda, call it research, and deliberately mislead journalists by declaring an absence of countervailing research and researchers. Researchers confident in their arguments and evidence should be unafraid to face opponents and opposing ideas. The researchers Kamenetz trusts do all they can to deny dissenters a hearing.

Another potential independent tool for holding education professionals accountable, in addition to testing, could be an active, skeptical, and inquiring press knowledgeable of education issues and conflicts of interests. Other countries have it. Why are so many US education reporters gullible sycophants?

 

Endnotes:

[1] She did speak with Samuel Casey Carter, the author of No Excuses: Lessons from 21 High-Performing High-Poverty Schools (2000) (pp. 81-84), but chides him for recommending frequent testing without “framing” “the racist origins of standardized testing.” Kamenetz suggests that test scores are almost completely determined by household wealth and dismisses Carter’s explanations as a “mishmash of anecdotal evidence and conservative faith.”

[2] Those sources are Daniel Koretz, Brian Jacob, and the “FairTest” crew. In fact, an enormous research literature revealing large benefits from standardized, high-stakes, and frequent education testing spans a century (Brown, Roediger, & McDaniel, 2014; Larsen & Butler, 2013; Phelps, 2012).

[3] The 1990s witnessed the chaos of the New Standards Project, MSPAP (Maryland), CLAS (California) and KIRIS (Kentucky), dysfunctional programs that, when implemented, were overwhelmingly rejected by citizens, politicians and measurement professionals alike. (Incidentally, some of the same masterminds behind those projects have resurfaced as lead writers for the Common Core Standards.)

 

References:

Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Cambridge, MA: Belknap Press.

Larsen, D. P., & Butler, A. C. (2013). Test-enhanced learning. In K. Walsh (Ed.), Oxford textbook of medical education (pp. 443–452). Oxford: Oxford University Press. http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2923.2008.03124.x/full

Phelps, R. P. (2012). The effect of testing on student achievement, 1910–2010. International Journal of Testing, 12(1), 21–43. http://www.tandfonline.com/doi/abs/10.1080/15305058.2011.602920

 

Kamenetz, A. (2015). The Test: Why our schools are obsessed with standardized testing—but you don’t have to be. New York: Public Affairs. Book Review, by Richard P. Phelps was originally published on Nonpartisan Education Blog

MEDIA BLACKOUT


Will Fitzhugh
The Concord Review
8 February 2014
 
 
In the United States, our media are not allowed to report on or discuss exemplary student academic achievement at the high school level. For example, in the “Athens of America,” The Boston Globe has more than 150 full pages each year on the accomplishments of high school athletes, but only one page a year on academics—a full page with the photographs of valedictorians at the public high schools in the city, giving their name, their school, their country of origin (often 40% foreign-born) and the college they will be going to. 
 
The reasons for this media blackout on good academic work by students at the secondary level are not clear, apart from tradition, but while high school athletes who “sign with” a particular college are celebrated in the local paper, and even on televised national high school games, the names of Intel Science Talent Search winners, of authors published in The Concord Review, and of other accomplished high school scholars may not appear in the paper or on television.
 
Publicity offers encouragement for the sorts of efforts we would like our HS students to make. We naturally publicize high school athletic achievements and this helps to motivate athletes to engage in sports. By contrast, when it comes to good academic work, we don’t mention it, so perhaps we want less of it? 
 
One senior high school history teacher has written that “We actually hide academic excellence from the public eye because that will single out some students and make others ‘feel bad.’”
 
Does revealing excellence by high school athletes make some other athletes or scholar-athletes or high school scholars feel bad? How can we tolerate that? I know there are some Progressive secondary schools which have eliminated academic prizes and honors, to spare the feelings of the students who don’t get them, but I don’t see that they have stopped keeping score in school games, no matter how the losers in those contests may feel.
 
 

SAMPLE MEDIA COVERAGE OF HS ATHLETES

Atlanta Journal-Constitution’s Signing Day Central—By Michael Carvell

11:02 am Wednesday, February 5th, 2014

“Welcome to the AJC’s Signing Day Central. This is the place to be to catch up with all the recruiting information with UGA, Georgia Tech and recruits from the state of Georgia. We will update the news as it happens, and interact on the message board below.

University of Georgia’s TOP TARGETS FOR WEDNESDAY…AND RESULTS

Lorenzo Carter, DE, 6-5, 240, Norcross: UGA reeled in the big fish, landing the state’s No.1 overall prospect for the first time since 2011 (Josh Harvey-Clemons).
Isaiah McKenzie, WR, 5-8, 175, Ft. Lauderdale (Fla.) American Heritage: This was one of two big surprises for UGA to kick off signing day. McKenzie got a last-minute offer from UGA and picked the Bulldogs because of his best buddy and high school teammate, 5-star Sony Michel (signed with UGA).
Hunter Atkinson, TE, 6-6, 250, West Hall: The Cincinnati commit got a last-minute call from Mark Richt and flipped to UGA. I’m not going to say we saw it coming, but … Atkinson had grayshirt offers from Alabama, Auburn and UCF.
Tavon Ross, S, 6-1, 200, Bleckley County: The Missouri commit took an official visit to UGA but decided to stick with Missouri. He’s signed.
Andrew Williams, DE, 6-4, 247, ECLA: He signed with Auburn over Clemson and Auburn. He joked with Auburn’s Gus Malzahn when he called with the news, saying “I’m sorry to inform you….. That I will be attending your school,” according to 247sports.com’s Kipp Adams.
Tyre McCants, WR-DB, 5-11, 200, Niceville, Fla.: Turned down late interest from UGA to sign with USF.”

This is just the tip of the proverbial iceberg, of course, in the coverage of high school athletes that goes on during the year. I hope readers will email me any comparable examples of the celebration of exemplary high school academic work that they can find in the media in their community, or in the nation generally.

 
 
—————————
“Teach by Example”
Will Fitzhugh [founder]
The Concord Review [1987]
Ralph Waldo Emerson Prizes [1995]
National Writing Board [1998]
TCR Institute [2002]
730 Boston Post Road, Suite 24
Sudbury, Massachusetts 01776-3371 USA
978-443-0022; 800-331-5007
Varsity Academics®

MEDIA BLACKOUT was originally published on Nonpartisan Education Blog

Brief sketch of the problem…

In the United States, we pay attention to and celebrate the work of HS athletes.
We carefully ignore the exemplary academic work of diligent HS scholars. The results follow as you might expect: we get what we want.

Will Fitzhugh

———————————
HIGH SCHOOL ATHLETES COLLEGE SIGNING NEWS!!—GEORGIA!!
Atlanta Journal-Constitution
11:02 am Wednesday, February 5th, 2014
AJC’s Signing Day Central

By Michael Carvell

Welcome to the AJC’s Signing Day Central. This is the place to be to catch up with all the recruiting information with UGA, Georgia Tech and recruits from the state of Georgia. We will update the news as it happens, and interact on the message board below.

UGA’S TOP TARGETS FOR WEDNESDAY…AND RESULTS

Lorenzo Carter, DE, 6-5, 240, Norcross: UGA reeled in the big fish, landing the state’s No.1 overall prospect for the first time since 2011 (Josh Harvey-Clemons).
Isaiah McKenzie, WR, 5-8, 175, Ft. Lauderdale (Fla.) American Heritage: This was one of two big surprises for UGA to kick off signing day. McKenzie got a last-minute offer from UGA and picked the Bulldogs because of his best buddy and high school teammate, 5-star Sony Michel (signed with UGA).
Hunter Atkinson, TE, 6-6, 250, West Hall: The Cincinnati commit got a last-minute call from Mark Richt and flipped to UGA. I’m not going to say we saw it coming, but … Atkinson had grayshirt offers from Alabama, Auburn and UCF.
Tavon Ross, S, 6-1, 200, Bleckley County: The Missouri commit took an official visit to UGA but decided to stick with Missouri. He’s signed.
Andrew Williams, DE, 6-4, 247, ECLA: He signed with Auburn over Clemson and Auburn. He joked with Auburn’s Gus Malzahn when he called with the news, saying “I’m sorry to inform you….. That I will be attending your school,” according to 247sports.com’s Kipp Adams.
Tyre McCants, WR-DB, 5-11, 200, Niceville, Fla.: Turned down late interest from UGA to sign with USF.

UGA COMMITS TO WORRY ABOUT? NOPE

Lamont Gaillard, DT, 6-3, 310, Fayetteville (N.C.) Pine Forest: This was probably the biggest scare on signing day. Gaillard’s coach said he signed with UGA over Miami at 9 a.m. but UGA didn’t announce it until 10:35 a.m.
Gilbert Johnson, WR, 6-2, 190, Homestead (Fla.) Senior: Speedster scared UGA after he told Rivals.com on Sunday night that he would sign with the Bulldogs, South Florida or Louisville … and then went MIA. UGA can relax after he was one of the team’s first signees.

Kendall Gant, safety, 6-2, 180, Lakeland (Fla.): He flipped from UGA to Marshall on Tuesday due to “academic reasons,” according to his coach, who also claimed his offer “got pulled” by the Bulldogs.

For the rest of UGA’s Big Board for 2014, including a rundown of commitments, go HERE

GEORGIA TECH’S TOP TARGETS FOR WEDNESDAY

Myles Autry, ATH, 5-9, 170, Norcross: Georgia Tech fans are always screaming about wanting to have a high-profile recruit commit on signing day on national TV. Autry picked Georgia Tech over FSU on ESPNU cameras. His older brother plays wide receiver for the Yellow Jackets.
Mike Sawyers, DT, 6-2, 300, Nashville, Tenn.: He signed with Tennessee after taking an official visit to Volunteers on the final weekend before signing day.

For the rest of Georgia Tech’s Big Board for 2014, including a rundown of commitments, go HERE

======================================

FOR COMPARISON, HERE IS SOME EXEMPLARY HS ACADEMIC WORK, BY DILIGENT HIGH SCHOOL STUDENTS, WHICH THE MEDIA (completely) IGNORED. We take it for granted that the media (including their coverage of education) should ignore the exemplary academic work of HS students, but we also ignore the consequences of doing that.

[height and weight of authors omitted…]

High School History Students
“Teach with Examples”
The Concord Review reports:

Nathaniel Bernstein of San Francisco, California: Bernstein, a senior at San Francisco University High School, published an 11,176-word history research paper on the unintended consequences of Direct Legislation in California. (Harvard)

Gabriel Grand of Bronx, New York: Grand, a senior at Horace Mann School, published a 9,250-word history research paper on the difficulties The New York Times had with the anti-Semitism of the day and also in covering the Holocaust. (Harvard)

Reid Grinspoon of Waltham, Massachusetts: Grinspoon, a senior at Gann Academy, published a 7,380-word history research paper on the defeat of legislation to allow eugenic sterilization in Massachusetts. (Harvard)

Emma Scoble of Oakland, California: Scoble, a senior at the College Preparatory Academy, published a 9,657-word history research paper on the Broderick-Terry Duel, which defeated pro-slavery forces in California in 1859. (NYU)

Brief sketch of the problem… was originally published on Nonpartisan Education Blog


On Writing

“First, we stopped demanding that students read anything very challenging in school, and then we stopped holding our teachers or students accountable for the quality of student writing.”
On Writing
National Center on Education and the Economy
By Marc Tucker on January 17, 2014 10:21 AM
 
 
I read a news story the other day that made my heart sink.  It was written by a professor in a business school at a public university.  He told a tale in which his colleagues agreed that the writing skills of their students were miserable, but none would take responsibility for dealing with it.  They were not, they said, writing teachers, and could not be expected to spend time doing what those miserable souls in the understaffed writing labs were expected to do.  This was just as true of the professors in the English department as it was of all their other colleagues.  The author of the article was pretty astute about the causes of that refusal.  Teaching someone to write well takes a lot of time and individual attention, he pointed out.  Professors in university departments are not compensated for that time.  Teaching students to write will take time away from what they need to do to advance in their profession.  And it is not likely to earn them the esteem of their colleagues.  So it was no surprise that his colleagues suggested that the students would be going into a business environment in which presentations were usually done with power points, so maybe the students did not have to learn how to write anyway.  Yes, they said that!
A year ago, my own organization reported on a study we had done of what is required of freshmen in their first-year credit-bearing courses in a typical community college.  We reported that the texts they are assigned are generally written at an 11th or 12th grade level and the students cannot read them, so their instructors are now used to summarizing the gist of the texts in power points they prepare for their students.  In these circumstances, it is hardly surprising that they assign little or no writing to their students.  They have evidently anticipated the suggestion of the business school faculty I was just quoting that they solve the problem by assuming that their students would not have to write.
But surely, you might be saying, it cannot really be that bad. Oh, but it can.  The attitudes of the college faculty I just reported on are not new.  The departmental faculty might have been prepared in the past to help their students with the technical aspects of writing in their particular field, but they never expected to have to teach basic competence in writing.  They assumed that would be done in our schools.  So what happened?
Two things happened.  First, we stopped demanding that students read anything very challenging in school, and then we stopped holding our teachers or students accountable for the quality of student writing.
I did not learn how to write from a writing manual.  I mostly learned to write by reading good writing, a lot of it, some of it fiction, much of it non-fiction.  And I had instructors in high school and college who were themselves good writers and took the time to coach me.  My friend William Fitzhugh tells us that very few students are ever asked to read a single non-fiction book from end to end in their entire school career, much less many such books.  More to the point, they are rarely asked to write very much and the expectations for what they do write are, on the whole, absurdly low.
And why is that?  Because we do not hold our teachers accountable for the quality of student writing.  Under prevailing federal law, we hold our teachers accountable for student performance in English, mathematics and, to a minor degree, science.  But the tests we use to hold them accountable for student performance in English typically do not require them to write anything, and, when they do, it is rarely more than a paragraph.  And why is that?  There is only one way to find out if a student can write a well-crafted 15-page essay and that is to ask them to write one.  And, if they are required to write one, someone has to read it.  To make sure that the scores given on the essay are reliable, it may be necessary to have more than one person read it.  That is time-consuming and expensive.  So we talk about English tests, but they do not really test speaking, listening or writing skills. They test reading skills.  The teachers know this, so they don’t waste their time teaching writing, probably the single most important skill we can teach.
It is unclear whether they could if they wanted to.  They could certainly ask students to write more, but most teachers of English do not have the time to do more than skim student written work and give it a global grade and maybe a comment or two.  But that is not going to help a developing writer very much.  Extended coaching is needed, at the hands of a good writer and editor.  And, by the way, we have no idea whether our teachers are themselves good writers, never mind good editors.  Many come from the lower ranks of high school graduates, and those are the same young people whose low writing skills I described at the beginning of this essay.
I have a cognitive dissonance problem.  There is a lot of talk about implementing the Common Core State Standards.  The Common Core calls for much deeper understanding of the core subjects in the curriculum, the ability to reason well, and to make a logical, compelling argument based on good evidence, which in turn requires the student to be able to marshal that evidence in an effective way.  Sounds like good writing to me.
But we talk about implementation of the Common Core as if it can be accomplished by giving teachers a workshop lasting several days and handing them a manual.  I don’t think so.  I would argue that there is no single skill more important to our students than the ability to write well.  Is there anyone who believes that students whose college instructors have discovered that they cannot write will somehow now emerge from high school as accomplished writers because their teacher got a manual and attended a three-day workshop on the Common Core State Standards?  That would qualify as a miracle.
If my analysis is anywhere near right, making sure our students have the single most important skill they will ever need requires us to 1) make sure that our teachers read extensively, write well and have the skills needed to coach others to be good writers; 2) organize our schools so that teachers have the time to teach writing, give students extended writing assignments, read carefully what the students have written and provide extensive and helpful feedback on it (all of which would require major adjustments in teacher load and school master schedules); and 3) change the incentives facing teachers, so that those incentives are based to a significant degree on the ability of students to write high quality extended essays.  If we don’t do that, we are just whistling Dixie.
 

————————–

“Teach by Example”
Will Fitzhugh [founder]
The Concord Review [1987]
Ralph Waldo Emerson Prizes [1995]
National Writing Board [1998]
TCR Institute [2002]
730 Boston Post Road, Suite 24
Sudbury, Massachusetts 01776-3371 USA
978-443-0022; 800-331-5007
Varsity Academics®

 

On Writing was originally published on Nonpartisan Education Blog
