ACTING OUT: GIULIANI, BADILLO AND MEDIA LASH OUT AT CUNY OVER READING TEST

by Lenore Beaky, LaGuardia

 

PSCcuny
NEWS BULLETIN

APRIL 2001

 


"Rudy rips CUNY"; "The Dumbing-Down at CUNY" (NY Post, March 9); "Giuliani Says CUNY Eased Its Standards" (NY Times, March 9); "Dumb & Dumber at CUNY" (Daily News, March 10). And that was just the beginning. A Newsday columnist charged that "CUNY has a history of playing with test results that makes one wonder if it has any standards at all" (March 12), while the Daily News decried "City University, that academic desert" (March 26).

Last month a wave of CUNY-bashing broke out among politicians and the press, focusing on the University’s decision to lower the passing score on the reading portion of the CUNY/ACT tests from 40 to 36. What happened?

The reporters and editorialists demonstrated little interest in or knowledge of the tests themselves, but reading and writing faculty had been pointing out the deficiencies of the CUNY/ACT for over a year. Assessment, if not the "science" that the Mayor’s Task Force dubbed it in 1999, is nevertheless no mystery: clear criteria for effective assessment have been set forth by professionals in reading, writing and testing. The International Reading Association (IRA) opposes "high-stakes testing . . . in which single test scores are used to make important educational decisions." It argues that high-stakes tests narrow the curriculum while overemphasizing the tests themselves. Instead, the IRA advocates "assessments built around…daily educational tasks." Assessment should minimize the role of time constraints and take into account the diversity of the population tested. The National Council of Teachers of English (NCTE) states that "one piece of writing—even if it is generated under the most desirable conditions—can never serve as an indicator of overall literacy." The NCTE adds that "both teachers and students must have access to the results" so that student learning can take place. And finally, since "assessment tends to drive pedagogy. . . it must encourage classroom practices that harmonize with what practice and research have demonstrated to be effective ways of teaching writing and of becoming a writer."

Every one of these criteria is violated or ignored by the CUNY/ACT tests. All three parts of the CUNY/ACT—the multiple-choice reading and writing portions and the essay—are severely time-constrained. Martha Bell (SEEK, Brooklyn College, and the chair of the Reading Discipline Council) told the Times that the CUNY/ACT "tested students’ ability to work quickly rather than to read well."

Another reading teacher suggests that students would do better to go directly to the questions and then search for the answers rather than actually reading the passages. The sample reading selection is written at a 15th-grade level though it is being used as a college admissions and remediation test. A better reading test, says the Discipline Council, would reflect what students do in college classes: analyzing and synthesizing a longer article and answering various types of questions. According to many CUNY faculty, the ACT’s reading selection is poorly written, and the ACT essay rewards formulaic writing that has little to do with college writing tasks. Finally, students receive their CUNY/ACT results as numbers only—even instructors are forbidden by ACT, Inc. to look at their exams; therefore, no learning can take place, except perhaps how to better take the test.

Once a particular test has been selected, those using it must establish its validity and reliability. Bill Crain, Professor of Psychology at City College, explains that "for the ACT, which determines whether a student may begin freshman composition, the most important kind of validity is predictive validity. Does the ACT actually predict success in freshman composition courses at CUNY?" But Crain observes that "CUNY’s central administration hasn’t reported any evidence that the ACT is valid at CUNY. It should have taken at least a year to gather and evaluate the evidence before implementing the test." CUNY has never made public the basis on which the original cut scores for the tests were chosen.

If the CUNY/ACT tests are so deficient, why are we using them? In summer 1999, Mayor Giuliani had threatened a cutoff of City funding unless the University implemented "common objective tests reflecting national norms" to demonstrate student readiness for college-level work. After a State Supreme Court judge determined that this threat was illegal, the CUNY Board of Trustees nevertheless passed a resolution mandating the adoption of such tests and requiring their implementation by spring 2000.

The CUNY Chancellory established an advisory board consisting of faculty and administration to select a vendor for the tests. The faculty believed that the vendor they had selected, ACT, Inc., would work with them to design tests appropriate for CUNY’s uniquely diverse students. Instead the Chancellory, citing the need for nationally-normed tests, allowed ACT to present to CUNY its ASSET multiple-choice tests, as well as an essay test using ACT standard topics. The Chancellory organized a brief and incomplete pilot-testing procedure, which lasted a few months.

Both the CUNY Reading and English Discipline Council faculty groups, as well as the University Faculty Senate, protested the exclusion of faculty from meaningful participation and asked that implementation of the CUNY/ACT tests be postponed so that valid data could be gathered to support a more careful process. Reading faculty never saw a complete practice test in advance, and received even CUNY’s incomplete practice materials only a week before the test was administered. Promised faculty development activities from CUNY never materialized. Nevertheless, insisting that its timetable be followed, the Chancellory forced students registered in the highest-level ESL and remedial writing and reading courses to take the CUNY/ACT tests in December 2000.

Only 27% of students taking the reading skills test met the cut score of 40 (out of a total of 53 possible points), so in January Executive Vice Chancellor Louise Mirrer lowered that score to 36. Admitting that the ACT tests had been normed nationally on a population consisting of only 10% ESL speakers versus CUNY’s local population of 30%, and now finally conceding "the ongoing need to assess the predictive validity of the instrument using criteria such as course grades," Mirrer indicated on January 17 that the original cut score of 40 would be "phased in during the next few semesters." As a result of the change, 60% of students now passed the test and were eligible (if they also passed the writing essay) to register for college composition.

The media first took note of the CUNY/ACT almost three months after the test was given, in a March 7 New York Times article, "The Pitfalls of Make-or-Break Tests" by Karen Arenson. Apparently, this was also the first time that Mayor Giuliani and Board of Trustees Chairperson Herman Badillo were hearing of the testing results and scoring change. They weren’t pleased. The Mayor said he was "very disappointed"; a Post news story reported Giuliani "doesn’t want CUNY to slip back to the old days of virtually no standards." Badillo told the News that the change was "contrary to the policy of the board."

Pundits rushed to join the condemnation, without much attention to the facts. A March 10 editorial in the Daily News asked indignantly, "What would you consider a reasonable passing grade on a college-level test? Sixty? Maybe 50? How about 36?" and denounced even the original cut score of 40 as "already ridiculously low." But the News had made an error in arithmetic that can only be called ironic, considering that the subject is basic skills: the cut scores of 36 and 40 are absolute numbers, not percentages. It would be impossible to get a score of 60 on the CUNY/ACT reading test, since it has a maximum of 53 points. A score of 40 means that 75% of questions were answered correctly, while a score of 36 corresponds to 68%.
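The conversion the News missed is elementary arithmetic. As a minimal sketch (using only the raw-score figures reported above, with rounding to the nearest whole percent):

```python
# Convert raw CUNY/ACT reading cut scores into percentages.
# The maximum raw score (53) and the two cut scores (40 and 36)
# are the figures reported in this article.
MAX_SCORE = 53

def cut_score_percent(raw_score, max_score=MAX_SCORE):
    """Return the percentage of questions answered correctly,
    rounded to the nearest whole percent."""
    return round(100 * raw_score / max_score)

print(cut_score_percent(40))  # original cut score -> 75
print(cut_score_percent(36))  # lowered cut score  -> 68
```

Treating 36 and 40 as if they were percentages out of 100, as the editorial did, makes the scores look far lower than they are.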

Editorial boards and columnists mocked CUNY students and administrators alike, brushing aside the explanation of Chancellor Goldstein that the adjustment was "all about the validity of the test," and the assertion of Vice Chancellor Mirrer that "adaptively norming a new test is common and responsible." No commentator asked why considerations of validity and careful norming hadn’t been raised before the test was implemented, rather than after. And none even asked whether the battery of CUNY/ACT tests was educationally valid in the first place.

Might students see valid tests in the near future? It seems unlikely. Chairperson Badillo, "working with officials at City Hall and Albany, said he is exploring using a private firm to design, administer and grade the test" (Daily News, March 23) in order to exclude from participation in the process all CUNY faculty members and administrators. Badillo went so far as to suggest that their total removal was the only way to eliminate the possibility that someone would "cook the books." Executive Vice Chancellor Mirrer indicated plaintive puzzlement at this proposal: "We have absolutely followed everything the board and the mayor’s [CUNY] task force have asked us to do."

Bibliography on the theory, practice and politics of testing

ACT. http://www.act.org/

Bracey, Gerald W. Thinking About Tests and Testing: A Short Primer in "Assessment Literacy." Washington, DC: American Youth Policy Forum, 2000. http://www.aypf.org/BraceRep.pdf

CUNY Community College Conference. The Mismeasure of Students: The Case Against the CUNY/ACT. April 2001.

FAIRTEST. http://www.fairtest.org/

Hartman, Joan E. "Accountability, Testing, and Politics." Profession 1999. NY: MLA, 1999.

Heubert, Jay P. "Graduation and Promotion Testing: Potential Benefits and Risks for Minority Students, English-Language Learners and Students with Disabilities." Poverty & Race. Poverty and Race Research Action Council, September/October 2000. 1-2, 5-7.

"High-Stakes Assessments in Reading." International Reading Association. August 1999. http://www.reading.org/advocacy/policies/high_stakes.htmL

Kohn, Alfie. The Case Against Standardized Testing: Raising the Scores, Ruining the Schools. Westport, CT: Heinemann, 2000.

McNeil, Linda and Angela Valenzuela. "The Harmful Impact of the TAAS System of Testing in Texas: Beneath the Accountability Rhetoric." The Civil Rights Project, Harvard University. http://www.law.harvard.edu/civilrights/conferences/testing98/drafts/mcneil_valenzuela.html

Sacks, Peter. Standardized Minds: The High Price of America’s Testing Culture and What We Can Do to Change It. Cambridge, MA: Perseus Books, 2000.

White, Edward M., William D. Lutz, and Sandra Kamusikiri, eds. Assessment of Writing: Politics, Policies, Practices. NY: MLA, 1996.