The power—and responsibility—of determining who is likely to enroll.

GUEST COLUMN | by Ardis Kadiu

In 1992, Ronald F. Urban, director of institutional research at Whitman College, proposed an innovative approach to increase admission yields.

Urban suggested adopting a strategy similar to that of political campaigns, which allocate resources to areas with more swing voters. The idea is that there’s a better chance of winning a swing voter’s vote than the vote of someone set on another candidate.

Urban provided a predictive equation for identifying which students — like swing voters — are more likely to enroll in one school over another. This launched the predictive modeling techniques colleges and universities use today.

A lot has changed since Urban’s original proposal. New technologies, better and more varied sources of data, as well as machine learning, have opened the door for even greater insights for the admissions professional.

At the same time, admissions and enrollment deans must contend with very real concerns about how prospective students’ data is collected and used. They also need to be ahead of unintended consequences of data-driven enrollment strategies.

Scoring Students by “Fit”

Predictive models start by looking at characteristics of current and graduated students. These include things such as high school GPA, SAT and ACT scores, academic interests, financial need, geographic location, ethnicity, and extracurricular activities.

Next, software compares historical student data to data about prospects. Schools typically acquire prospect information from testing organizations like the College Board. Students opt in to be included.

When prospect data is incomplete, it’s possible to enrich it with other sources such as geolocation, household income, and consumer affinity groups.

The desired outcome of applying or enrolling (y) is modeled as a function of the various characteristics (x). You may recall a simplified version of this relationship from high school mathematics: y = f(x)

Finally, a score is generated that indicates how likely it is that a prospect will apply or enroll. The score represents the “fit” of the prospect to past applicants and enrollees. Admissions departments then focus marketing dollars towards prospects with the highest fit scores.
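The scoring step can be sketched with a minimal logistic model. Everything here is illustrative: the feature names, weights, and bias are hypothetical stand-ins for what a real model would learn from historical applicant data.

```python
import math

# Hypothetical weights a model might learn from historical applicant data.
WEIGHTS = {"gpa": 0.8, "test_score_pct": 1.2, "campus_visit": 0.6}
BIAS = -2.0

def fit_score(prospect):
    """Return the modeled probability y = f(x) that a prospect applies or enrolls."""
    z = BIAS + sum(WEIGHTS[k] * prospect.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link squashes z into (0, 1)

strong = fit_score({"gpa": 1.0, "test_score_pct": 0.9, "campus_visit": 1})
weak = fit_score({"gpa": 0.4, "test_score_pct": 0.3, "campus_visit": 0})
```

A prospect whose characteristics resemble past enrollees gets a higher score, and marketing spend can then be ranked by that number.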

To arrive at a sound result, it’s paramount that historical data is rigorously prepared for modeling.

For example, using data that under- or over-represents ethnic or socioeconomic populations might steer schools towards a strategy that shuts out other groups. Schools need to recognize potential bias in their data and account for it. One way is to give the model a more evenly representative sample.
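One simple way to produce a more evenly representative sample is naive oversampling: duplicate records from underrepresented groups until every group appears equally often. This is a hypothetical sketch (the group labels are illustrative), and real pipelines use more careful techniques, but it shows the idea.

```python
import random

def rebalance(records, group_key):
    """Oversample smaller groups so every group appears equally often."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Top up the group with random duplicates until it reaches the target.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

data = [{"region": "urban"}] * 90 + [{"region": "rural"}] * 10
balanced = rebalance(data, "region")
```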

While fit scores serve admissions teams well, until recently there has been a missing piece of the prospect profile: behavior.

Scoring Students by Behavior

Behavior refers to the ways prospects interact with a school’s communications, its website, or even competing schools’ sites. Examples include filling out a request for information form, clicking a link in an email, starting an application, unsubscribing from text messages, and web browsing habits.

Added up, these behaviors provide a much richer understanding of how likely a prospect is to enroll than fit alone.

The capacity for software to analyze behavior data in real time is growing. This means that admissions teams can soon have predictive scores that update every hour rather than at the start of an admissions cycle.

And with machine learning, it is possible to improve predictive models by adjusting the influence of behavior variables to match their significance. This results in more accurate scores and insights on how to better craft messages and communications, and even when and how to deliver them.
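Adjusting the influence of behavior variables can be sketched as a simple delta-rule update: each signal’s weight is nudged up or down depending on how well the current score predicts actual enrollment. The signal names and the two-record history below are hypothetical, and production systems use far richer models.

```python
# Hypothetical behavior signals; weights start at zero and are learned.
signals = ["opened_email", "started_application", "unsubscribed"]
weights = {s: 0.0 for s in signals}
LEARNING_RATE = 0.5

def score(prospect):
    return sum(weights[s] * prospect[s] for s in signals)

# (behaviors, enrolled?) pairs the model learns from -- illustrative data.
history = [
    ({"opened_email": 1, "started_application": 1, "unsubscribed": 0}, 1),
    ({"opened_email": 1, "started_application": 0, "unsubscribed": 1}, 0),
]

for _ in range(200):                 # repeated passes over the history
    for prospect, enrolled in history:
        error = enrolled - score(prospect)
        for s in signals:            # raise or lower each signal's influence
            weights[s] += LEARNING_RATE * error * prospect[s]
```

After training, signals associated with enrolling (starting an application) carry more weight than signals associated with dropping out of the funnel (unsubscribing), which is exactly the adjustment the column describes.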

Stealth No More


Another area where technology is helping admissions teams is with “stealth applicants.” These are students who apply to a school but aren’t part of the prospect pool. Because they aren’t known until they apply, it’s difficult to track information that’s useful for future recruiting.

When and how did they first interact with the school?

Do they enroll more or less than “known” prospects?

Using “identity stitching,” engineers are getting better at finding out who stealth applicants are.

An example:

A stealth student starts an application. They receive an email encouraging them to explore the website. They click a link in the email that takes them to the website. That click provides data that can be used to retrospectively stitch the now-known student to anonymous visits they made to the school’s website in the past.
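The stitching step can be sketched in a few lines. The idea is that the email click carries the same anonymous visitor ID (for example, a cookie value) as the student’s earlier site visits, so past activity can be linked to the now-known applicant. All IDs, emails, and page paths here are hypothetical.

```python
# Anonymous browsing history keyed by a visitor ID such as a cookie value.
anonymous_visits = {
    "visitor-42": ["/virtual-tour", "/programs/biology"],
    "visitor-99": ["/athletics"],
}
profiles = {}  # known applicant -> stitched browsing history

def stitch(applicant_email, visitor_id):
    """Attach an anonymous visitor's past page views to a known applicant."""
    history = anonymous_visits.pop(visitor_id, [])
    profiles.setdefault(applicant_email, []).extend(history)

# The applicant clicks an email link tagged with their old visitor ID.
stitch("jane@example.com", "visitor-42")
```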

Privacy and Transparency

While this sounds like a dream come true for an admissions dean pressed to meet enrollment goals, it should rightly set off alarm bells for anyone concerned about privacy (that dean included).

With the General Data Protection Regulation (GDPR) going into effect in the European Union, and greater scrutiny on how companies like Facebook handle user data, schools need to be more transparent about the information they collect.

Working with their technology partners, for example, they should update website privacy policies and provide ways for students to see what data has been collected.

And if students want their data deleted, they should have an easy way to request it.

And if predictive models are used in making admissions or financial aid decisions, students should be able to see how the model arrived at the decision.

Ardis Kadiu is the CEO of Element451, creators of the admissions marketing and CRM platform by the same name. He is also a principal of Spark451, a higher education enrollment strategy, technology, and marketing firm that combines creativity with powerful technology to achieve measurable results. Ardis has helped schools use technology to optimize their enrollment activities for more than a decade.