Predictor/performance ratings break new ground

In discussing intervention programs, officials often point out the University's decentralized nature: Institution-wide goals, they note, must be accomplished through initiatives at many levels. One player in this collective effort is Associate Professor of Biomedical Engineering Monty Reichert, who has long been interested in the challenges facing minority engineering students.

He has investigated black engineering students' elevated rate of attrition into Trinity and studied mentoring interventions during a sabbatical at North Carolina Central University.

One of Reichert's more recent projects has gained the attention of many administrators; the President's Council on Black Affairs is likely to use his study of "predictor/performance ratings" in its "dashboard indicators" project (see story, page 1).

In this study of Trinity graduates in the mid-'90s, Reichert took the "reader rating sums" that the Admissions Office develops to score applications and related them to students' GPAs at graduation. Reichert found that, in the aggregate, reader rating sums correlate very closely with final GPAs. In other words, reader rating sums are an excellent predictor of eventual GPAs.

But when the data is broken down by race, black students in each of Reichert's quintiles of reader rating sums consistently graduate with GPAs almost 0.5 points below those predicted for white students with similar reader rating sums. The data for Hispanic students is statistically insignificant; Asian-American students perform close to average. Simply put, at every level of the rating scale, black students graduate with a lower GPA than white students admitted with the same reader rating sum.
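The article does not spell out Reichert's exact method, but the kind of analysis described can be sketched roughly as follows. The column names, file name and racial labels below are assumptions for illustration, not details from the study:

    import pandas as pd

    # Hypothetical dataset of Trinity graduates with an admissions reader
    # rating sum, a graduating GPA and a race category for each student.
    df = pd.read_csv("trinity_graduates.csv")  # columns assumed: reader_rating_sum, final_gpa, race

    # Overall, how closely do reader rating sums track final GPAs?
    print("aggregate correlation:", df["reader_rating_sum"].corr(df["final_gpa"]))

    # Split graduates into five equal-sized bands (quintiles) of reader rating sums.
    df["quintile"] = pd.qcut(df["reader_rating_sum"], q=5, labels=[1, 2, 3, 4, 5])

    # Within each quintile, compare mean graduating GPA by race.
    by_group = df.groupby(["quintile", "race"], observed=True)["final_gpa"].mean().unstack()
    print(by_group)
    print("black-white GPA gap by quintile:")
    print(by_group["black"] - by_group["white"])

A result like the one reported would show a strong aggregate correlation alongside a gap of roughly half a grade point between black and white students in every quintile.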

This study, Reichert says, explodes the argument that black students underperform because they are admitted under lower standards. It suggests instead that black students have a different experience at the University, one that leaves them with GPAs below what the otherwise reliable reader rating sums would predict.

"These data are proving to be important nationwide," noted Vice Provost Judith Ruderman. "What Monty found on this campus is true on other campuses as well. Without data, we have no solid ground for identifying-much less addressing-challenges."

Admissions Director Christoph Guttentag stressed that Reichert's research is independent of the Admissions Office, which never uses reader rating sums to predict students' GPAs. His office sums reader ratings across six categories (SAT scores, extracurricular activities, academic accomplishment, rigor of high school, application essays and recommendations) to help make admissions decisions. Guttentag noted that he would have no reason to think that black students' reader rating sums are inflated, a possibility that Reichert pointed out as an alternative explanation for black students' apparent underperformance.
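The arithmetic Guttentag describes is simply a total across the six categories. As a plain illustration (only the category names come from the article; the 1-to-5 scale and the layout below are assumptions, not the Admissions Office's actual system):

    # Hypothetical reader ratings for one applicant on an assumed 1-5 scale.
    ratings = {
        "SAT scores": 4,
        "extracurricular activities": 3,
        "academic accomplishment": 5,
        "rigor of high school": 4,
        "application essays": 3,
        "recommendations": 4,
    }

    # The single number used to help make an admissions decision.
    reader_rating_sum = sum(ratings.values())
    print(reader_rating_sum)  # 23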

Reichert said he hopes that this data will be used as an empirical justification for intervention, in the same way PCOBA members hope to use their "dashboard indicators."

"At a private university, faculty have to buy into the goal, and although most think that diversifying the profession is a good idea, they need to be shown why and how," he said, noting that faculty can contribute to the effort through initiatives such as the letter he co-wrote in support of black faculty in the fall of 1998.

He added that he thinks President Nan Keohane has created an environment in which people can effect change. Now, he said, they need to take advantage of it.

Ruderman said such work is in progress. "At Duke, process and products are quite decentralized, but in this case, the pieces will fit together like a jigsaw puzzle, providing a picture of our campus so that we can be more focused on improving it," she said.
