Tech&Learning
Erik Ofgang

Colleges Are Using AI To Predict Student Success. These Predictions Are Often Wrong

When it comes to predicting college success, AI algorithms may have some work to do.

Predictive AI technologies are commonly used by colleges and universities, but a recent study found they are more likely to incorrectly predict failure for Black and Hispanic students than for white students. These tools are also less likely to correctly predict success for students of color than for white students.

Here’s a closer look at the research, which appeared in July in AERA Open, a peer-reviewed journal of the American Educational Research Association.

What The Researchers Found About AI Algorithms’ Ability to Predict Student Success

“Our study was motivated by the increasing use of machine learning (ML) and artificial intelligence (AI) for college and university operations, including to enhance student success outcomes,” says Denisa Gándara, a co-author of the study and an assistant professor in the College of Education at the University of Texas at Austin. “Colleges and universities leverage ML to predict student success outcomes, such as persistence and graduation rates. They do this for various reasons, including to make admission decisions or to target interventions.”

Gándara adds that she and her co-authors were inspired to undertake this study because research from other fields, such as healthcare and criminal justice, had previously revealed bias against socially marginalized groups.

“In higher education, such biases could exacerbate existing social inequities, influencing crucial decisions like admissions and the allocation of student support services," she says.

The models Gándara and her colleagues studied inaccurately predicted that a student wouldn’t graduate 12% of the time if the student was white and 6% of the time if the student was Asian. If the student was Hispanic, the models were wrong 21% of the time, and if the student was Black, they were wrong 19% of the time.

A similar pattern held for predicting success. The models incorrectly predicted success for white and Asian students 65% and 73% of the time, respectively, compared to just 33% of the time for Black students and 28% for Hispanic students.
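These figures are group-wise error rates: the false negative rate (actual graduates the model flagged as unlikely to graduate) and the false positive rate (non-graduates the model predicted would succeed). As a minimal sketch of how such rates are computed, assuming a simple table of outcomes and predictions (the column names and toy data below are hypothetical, not the study's):

```python
import pandas as pd

# Hypothetical toy data: one row per student, with the true outcome
# and the model's prediction. A real audit would use thousands of rows.
df = pd.DataFrame({
    "race":      ["White", "White", "Black", "Black", "Hispanic", "Asian"],
    "graduated": [True, False, True, False, True, True],
    "predicted": [True, True, False, False, False, True],
})

for race, group in df.groupby("race"):
    grads = group[group["graduated"]]
    non_grads = group[~group["graduated"]]
    # False negative rate: actual graduates predicted to fail.
    fnr = (~grads["predicted"]).mean() if len(grads) else float("nan")
    # False positive rate: non-graduates predicted to succeed.
    fpr = non_grads["predicted"].mean() if len(non_grads) else float("nan")
    print(f"{race}: false negatives {fnr:.0%}, false positives {fpr:.0%}")
```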

How Was The Research Conducted?

For the study, researchers tested four predictive machine-learning models that are widely used in higher education. The models were trained on 10 years of data from the U.S. Department of Education’s National Center for Education Statistics, covering 15,244 students.

“We used a large, nationally representative dataset that includes variables that are commonly used to predict college student success,” Gándara says. These include demographic and academic achievement variables.

The researchers used 80% of this data to train the models and then tested each model’s ability to make predictions with the remaining 20% of the data. The process likely meant the models were being fed better training data than many colleges are using.
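A sketch of that 80/20 protocol, using scikit-learn with synthetic data standing in for the federal dataset (the classifier here is illustrative; the article doesn't name the four models studied):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in: in the study, the features were demographic and
# academic-achievement variables and the label was a success outcome.
X, y = make_classification(n_samples=15244, n_features=10, random_state=0)

# Train on 80% of students, hold out 20% to test predictions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```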

“Schools and colleges typically use smaller administrative datasets that include data on their own students,” Gándara says. “There is wide variability in the quality and quantity of data used at the institution level.”

What Is The Impact Of This Research?

Given how entrenched AI tools are becoming in education, the implications of this research are significant. “Prediction models are not confined to colleges and universities — they are also widely used in K-12 schools,” Gándara says. “If predictive models are used in college admissions decisions, racially minoritized students might be unfairly denied admission if the models predict lower success rates based on their racial categorization.”

Beyond admissions, these inaccuracies can create other problems. “In K-12 or higher education, there is a risk of educational tracking, where students from racially minoritized groups are steered toward less challenging educational trajectories,” Gándara says.

In addition, inaccurate predictions of failure might lead schools or educators to direct greater resources toward students who don’t actually need those interventions.

“The evidence of algorithmic bias and its varied implications underscore the importance of training end users, including admissions officials, academic advisors, and faculty members or teachers, on the potential for bias,” Gándara says. “Awareness of algorithmic bias, its direction, and the groups affected can help users contextualize predictions and make more informed decisions.”
