
Sponsored Content

Developing a Holistic View of Student Success

Taking an institutionwide approach
University Business, December 2018
From left to right: Meghan Turjanica, Product Manager for Student Success, Jenzabar; Mathew Arndt, Lead Data Scientist, Jenzabar; Wade Leuwerke, Associate Professor, Drake University, Co-creator, Jenzabar Student Success Survey.

Many institutions ask their enrollment teams, advising offices, faculty and other departments to focus on student success, but these departments often do not understand all of the unique factors that contribute to the success of their students. A more holistic view is crucial to helping students succeed.

In this web seminar, presenters from Drake University and Jenzabar discussed the importance of considering both academic and nonacademic factors when building a student success plan, how to help each department understand its role, and when to provide students with additional support.


Meghan Turjanica
Product Manager for Student Success

Mathew Arndt
Lead Data Scientist

Wade Leuwerke
Associate Professor, Drake University
Co-creator, Jenzabar Student Success Survey

Meghan Turjanica: When we’re talking about student success, there are a lot of factors that we want to know about. We know some easy-to-measure data points: academic preparedness, high school GPAs, ACT/SATs and other standardized test scores, scholarship or student loan amount, commitment, registration, involvement. These student success factors provide a great set of data points that we can measure, and in combination, we can often use them to build a predictive model.

Mathew Arndt: The main idea of predictive models is to take known data to predict future outcomes. When you’re thinking about data, you have to first think about the timing of that data. There are a lot of different choices in this regard. At Jenzabar, we often do a ‘beginning of term’ model to make these predictions. This is a proactive approach because you can get those predictions early on, and then can intervene with the students going forward.

Along the same lines, this model produces a retention score, or prediction probability, for each student. This gives you great insight into each student: you can rank them from most at-risk to least at-risk. But you also get insights on groups of students and the characteristics those students share.
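As a rough sketch of the idea (not Jenzabar's actual model), a beginning-of-term retention model combines known data points into a probability and ranks students by risk. Every feature name, weight and value below is hypothetical, standing in for whatever a real model would learn from historical retention data.

```python
import math

# Hypothetical beginning-of-term features for three students (all values illustrative).
students = {
    "A": {"hs_gpa": 3.8, "credits_registered": 16, "aid_gap": 0.1},
    "B": {"hs_gpa": 2.9, "credits_registered": 12, "aid_gap": 0.6},
    "C": {"hs_gpa": 3.2, "credits_registered": 15, "aid_gap": 0.3},
}

# Illustrative logistic-regression-style weights; a real model would fit
# these to historical outcomes rather than set them by hand.
weights = {"hs_gpa": 1.2, "credits_registered": 0.15, "aid_gap": -2.0}
bias = -3.0

def retention_probability(features):
    """Logistic function over a weighted sum of the student's features."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

scores = {sid: feats for sid, feats in students.items()}
scores = {sid: retention_probability(feats) for sid, feats in students.items()}

# Rank from most at-risk (lowest retention probability) to least.
ranked = sorted(scores, key=scores.get)
print(ranked)  # most at-risk student first
```

Ranking on the probability, rather than a hard yes/no prediction, is what lets advisors prioritize outreach and also aggregate risk across groups of students.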

A data-driven model pulls from several different factors—financial, personal, commitment, academic—all of which play into the predictions. A model is based on the data; the more data you can analyze, the more likely your predictions will be on target.

Wade Leuwerke: When we weave in noncognitive factors, we get a much more holistic view. I encourage you to think about the kinds of attitudes, habits and behaviors that you think are critical for students to be successful.

Here’s our proposition: If these noncognitive factors are related to student outcomes, and if you can measure them, especially before students get there or right at the beginning of the school year, and you can identify strengths and weaknesses, then you can reach out and intervene with your students with a goal of increasing retention.

Does it work? Research clearly shows that these noncognitive factors are potent predictors of our most important outcomes, both academic performance and persistence in school. They also add new information—they account for additional variance.

The student success survey kicks back a percentile rank for how each student scores on each of seven scales: academic engagement, academic self-confidence, campus engagement, educational commitment, institutional fit, life roles and resilience. Then we bucket those students into the bottom quartile, the middle 50 percent and the upper quartile.
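A minimal sketch of that bucketing, under the assumption that a percentile rank is simply the percentage of the cohort scoring at or below a given student; the scale name comes from the talk, but the cohort scores and cutoff logic here are illustrative:

```python
def percentile_rank(score, all_scores):
    """Percent of cohort scores at or below this score (illustrative definition)."""
    return 100 * sum(s <= score for s in all_scores) / len(all_scores)

def bucket(pct):
    """Map a percentile rank to the three groups named in the talk."""
    if pct <= 25:
        return "bottom quartile"
    if pct <= 75:
        return "middle 50 percent"
    return "upper quartile"

# Hypothetical "academic engagement" scale scores for a small cohort.
cohort = [42, 55, 61, 68, 70, 74, 80, 85, 90, 95]
buckets = {s: bucket(percentile_rank(s, cohort)) for s in cohort}
print(buckets[42], buckets[70], buckets[95])
```

The same bucketing would be repeated for each of the seven scales, so a single student can sit in the bottom quartile on one scale and the upper quartile on another.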

Meghan Turjanica: Where do we go from here? First of all, collecting data is not enough. Sometimes it’s important to ask what you’re collecting that data for. Retention rate is just an institution’s measure of success, not necessarily an individual student’s measure of success. So ask yourself what your goals are, and then create your programs around those goals. That means coordinated efforts from all departments.

Also, tell people what you’re doing. If you’re the person in charge of student success on your campus, you need to tell people what you’re doing and why you’re doing it. For example, “We’re giving a survey instrument, and we’re measuring these attitudes, behaviors and motivations, because with this information, we can have smarter interventions with our students.”

Another big thing is to remember to tell people how you’ve done. Celebrate the fact that your retention rate went up 1 percent. That can be huge, and we find that sometimes that credit is not given back to people. A lot of hard work went into that, so make sure to tell people it’s because of what they did.

To watch this web seminar in its entirety, please visit