Article

Student Characteristics, Institutional Factors, and Outcomes in Higher Education and Beyond: An Analysis of Standardized Test Scores and Other Factors at the Institutional Level with School Rankings and Salary

1 Department of Education Reform, University of Arkansas, Fayetteville, AR 72701, USA
2 Department of Psychology, University of Arkansas, Fayetteville, AR 72701, USA
* Author to whom correspondence should be addressed.
J. Intell. 2022, 10(2), 22; https://doi.org/10.3390/jintelligence10020022
Submission received: 19 January 2022 / Revised: 21 March 2022 / Accepted: 24 March 2022 / Published: 1 April 2022
(This article belongs to the Special Issue Intelligence, Competencies, and Learning)

Abstract: When seeking to explain the eventual outcomes of a higher education experience, do the personal attributes and background factors students bring to college matter more than what the college is able to contribute to the development of the student through education or other institutional factors? Most education studies tend to simply ignore cognitive aptitudes and other student characteristics—in particular the long history of research on this topic—since the focus is on trying to assess the impact of education. Thus, the role of student characteristics has in many ways been underappreciated in even highly sophisticated quantitative education research. Conversely, educational and institutional factors are not as prominent in studies focused on cognitive aptitudes, as these fields focus first on reasoning capacity, and secondarily on other factors. We examine the variance in student outcomes due to student characteristics (e.g., cognitive aptitudes) versus institutional characteristics (e.g., teachers, schools). At the level of universities, two contemporary U.S. datasets are used to examine the proportion of variance in various university rankings and long-run salary accounted for by student cognitive characteristics and institutional factors. We find that the results differ somewhat depending upon the order in which the variables are entered into the regression models. We suggest some fruitful paths forward that might integrate the methods and findings showing that teachers and schools matter with the broader developmental bounds within which these effects take place.

1. Introduction

When seeking to explain the eventual outcomes of a higher education experience, do the personal attributes and background factors students bring to college matter more than what the college is able to contribute to the development of the students through education or other institutional factors? A long line of work within the fields that study cognitive reasoning and aptitudes suggests that student background characteristics, especially cognitive aptitude, are important to outcomes not only within college but well beyond it (e.g., Brown et al. 2021; Deary et al. 2007; Schmidt and Hunter 2004), whereas other fields, such as those that specifically study higher education, place more emphasis on the role that various institutional factors might play in the performance and eventual achievement of students (e.g., Light 2001; Stinebrickner and Stinebrickner 2007).
Ultimately, it is very hard to disentangle student and background factors from what a college adds, and, in part, all studies on education depend on the factors and methodological approach a researcher wishes to emphasize (e.g., Huntington-Klein 2020; Schlotter et al. 2011; Singer 2019). Most education studies tend to simply ignore cognitive aptitudes and other student characteristics—in particular the long history of research on this topic—since the focus is usually on trying to assess the impact of education, and so the role of student characteristics has in many ways been forgotten, or perhaps even ignored (Maranto and Wai 2020). Conversely, educational and institutional factors are not as prominent in studies focused on cognitive aptitudes or abilities, as these fields focus first on reasoning capacity (Hunt 2009), and perhaps secondarily on the contribution of other factors.
In this paper, we study student characteristics and institutional factors at the level of institutions to assess whether the findings align with the past studies on cognitive aptitudes and student characteristics, but also to examine the role of institutional or other factors. The aim of our paper is not to clearly adjudicate between the two different perspectives but to consider how our approach provides a way of thinking about this problem in different ways. We first provide a historical overview focused on the role of cognitive aptitudes, as this contribution adds to that line of work, and it also illustrates that this ongoing discussion surrounding what factors matter for education has a thread that can be traced back decades. We emphasize that our view on cognitive abilities and aptitudes is that they are developed and that education is both a product of cognitive aptitudes and can enhance cognitive aptitudes (Hair et al. 2015; Lohman 1993; Ritchie and Tucker-Drob 2018; Snow 1996).

2. Brief Historical Review Focused on the Role of Cognitive Aptitudes

The importance of cognitive aptitudes for life outcomes has been widely replicated across the decades in numerous longitudinal samples globally (e.g., Brown et al. 2021; Deary et al. 2007). Developed cognitive aptitudes are especially important for learning in schools (Detterman 2016; Snow 1996) and for educational outcomes (e.g., Brown et al. 2021; Deary et al. 2007). Though the typical approach to studying the basic science of cognitive aptitudes is not to consider its role in applied or historical contexts, sometimes it is within such contexts that a greater understanding of how and where basic science may or may not hold is obtained. In this paper, therefore, we examine the role of student cognitive aptitude in education by examining the pattern of correlations and variance explained by cognitive aptitude with educational and occupation-related outcomes, and, conversely, the variance explained by institutional factors. First, we provide a historical review of studies at both the individual and group level—illustrating the replication of findings across the very different disciplinary perspectives of cognitive aptitude research and education policy research—and then introduce our empirical study focused on the aggregate level of colleges and universities.
In a landmark U.S. educational report (Coleman et al. 1966), aptitude and achievement test scores were collected from students across multiple grades in over 4000 public schools, along with surveys of schools and students, for a total sample of over 645,000. This report uncovered that about 10% to 20% of the variance in student achievement was due to schools, about 80% to 90% was due to student characteristics themselves, and teacher quality accounted for about 1% of the variance (Detterman 2016). The report initiated a national discussion, and much educational research since that time has investigated whether these findings are replicable. The study drew on a sample of K-12 students and schools that was representative of the U.S. at the time. Do these findings hold for samples at different points in time, for individual and aggregate samples, for studies using different methods, and for studies in different countries? And do they hold not only in K-12 education, but also in higher education?
Over the following half century, reviews of the findings of what has come to be known as the Coleman report have largely confirmed them (e.g., Detterman 2016; Gamoran and Long 2007; Jencks et al. 1972). Jencks et al. (1972) replicated the finding that much of the variance in student achievement was due to students, and in a 40-year follow-up of the Coleman report that included data on developing countries, Gamoran and Long (2007) found that the findings were replicated in countries with a per capita income above $16,000.

2.1. Using Other Methods: Twin Studies and a Natural Experiment

Other than large sample randomized controlled trials (RCTs), studies of twins are able to account for endogenous factors such as genetics in the estimation of how much of the variance in student achievement is due to students versus teachers or schools in education research (Asbury and Wai 2020; Byrne et al. 2010; Hart et al. 2021). In a recent study examining classroom-level influences on literacy and numeracy among twin samples in the U.S. and Australia (Grasby et al. 2019), the classroom accounted for about 2% to 3% of the variance in achievement. These authors cautioned that the averaged results may be a lower-bound estimate and that their design could not detect classroom influences at the level of the individual student, but their estimate was nonetheless at odds with much of the global public discourse focused primarily on the influence of teachers and the classroom.
An unusual opportunity for a natural experiment arose from World War II, after the city of Warsaw, Poland, was destroyed: the government assigned residents randomly to housing in the newly reconstructed city. Firkowska et al. (1978) collected general cognitive aptitude data (Raven's matrices) in addition to parent education and occupation for most of the students born in 1963 in Warsaw. When breaking down the variance in Raven's scores due to district, school, and family characteristics, the authors found that the variance due to schools was about 2.1%. This estimate is right in line with the twin studies. Though this was an unusual natural experiment, it should be noted that the most rigorous large sample educational RCTs in the U.S. and U.K. tend to find very small or uninformative effects that are typically much smaller than those reported in the literature that does not randomize (e.g., Lortie-Forgues and Inglis 2019; Sims et al. 2020).

2.2. Estimates of the Teacher’s Contribution to Student Achievement

Studies using K-12 student-level administrative data in the U.S. on a sample of about 23 million students in the states of Florida and North Carolina across the decade studied (Chingos et al. 2014; Whitehurst et al. 2013) were able to estimate the proportion of variance in student achievement on test scores due to teachers at about 4% to 6.7%, due to schools at about 1.7% to 3%, due to districts at about 1.1% to 1.7%, and due to superintendents at about 0.3%. This shows that—at least when ignoring the contribution of students (and related background factors) to student achievement—teachers appear most influential, followed by schools, districts, and superintendents.

2.3. Estimates of Teacher and School Effects Using Methods Focused on Forward Causal Inference

This tendency of education research to neglect the contribution of students to student achievement is probably due, in large part, to the focus of the education research community on what variables they think they can change in the educational environment of the student (e.g., Schlotter et al. 2011; Singer 2019). We should note that, up to this point in our brief review, the focus has been on studies at both the individual and group level that examine the proportion of variance accounted for by students, teachers, and schools. Additionally, we have summarized studies by treating ability and achievement tests as somewhat interchangeable, but there are debates around what is measured by large-scale international assessments such as PISA with regard to cognitive aptitudes versus learning outcomes (e.g., Baumert et al. 2009; Engelhardt et al. 2021; Rindermann 2007).
Gelman and Imbens (2013) explained that reverse causal questions are questions about the unknown causes of known effects, whereas forward causal inference requires estimating the unknown effects of known causes. Thus, in the literature reviewed so far, we are estimating the known variance proportions accounted for by student, teacher, and school sources without having a research design that can tell us what are the specific causes. However, forward causal questions would take the form of something like “What is the causal effect of having an effective teacher for one year on students’ academic outcomes?” (Wai and Bailey 2021), and to answer such a question policy researchers might use a random or quasi-random assignment of students to different teachers and assess the impact of this on outcomes. Much of the time outcomes are changes in test scores in the short run (for a review see Goldhaber 2015), but sometimes the effects of teachers can persist for years, such as on earnings (e.g., Chetty et al. 2011, 2014), and the differential effects of schooling environments can also influence short- and long-run outcomes (e.g., Atteberry and McEachin 2020; Chetty et al. 2014; Dynarski et al. 2013; Wolf 2019).
Thus, the approach taken in this paper focuses on reverse causal questions, providing some of the fuzzy boundaries around expectations of what teachers or schools might be able to contribute to the eventual achievement of students; it does not take away from the utility of teachers and schools in improving student achievement and outcomes within reasonable bounds. The largest threat in most approaches from the economics of education is selection bias, and education economists and policy researchers themselves name cognitive abilities as a key source of such bias (e.g., Schlotter et al. 2011). For example, if students with higher developed cognitive aptitude are selected into a given program, it becomes unclear whether that higher aptitude, the program, or something else is causing later outcomes for those students. For integrative understanding, it makes sense to use both (or even additional) approaches as complementary tools for understanding the role of students, teachers, and schools in student achievement, and for identifying which interventions may be cost-effective and beneficial relative to counterfactuals.

2.4. Estimates of the Students’ Contribution to Student Achievement

Up to this point we have focused on reviewing studies looking at the variance in student achievement accounted for by schools or teachers, but Coleman et al. (1966) estimated that roughly 80% to 90% of student achievement variance was due to students and related background factors (Detterman 2016). What about studies that estimate the student’s contribution to student achievement? Deary et al. (2007) examined 13,248 English school children who were tested on The Cognitive Abilities Test at age 11 and took General Certificate of Secondary Education (GCSE) tests around age 15 or 16. The correlation between the academic achievement general factor and the cognitive aptitude general factor was 0.81. Kaufman et al. (2012) examined 2520 participants who took the Kaufman intelligence and achievement tests and 4969 participants who took the Woodcock–Johnson intelligence and achievement tests. The overall average correlation between the academic achievement general factor and the cognitive aptitude general factor was 0.83. Thus, in both these studies in the U.K. and U.S., respectively, general cognitive aptitude accounted for roughly two-thirds of the variance in academic achievement.
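As a quick check on that arithmetic, the proportion of variance explained is the squared correlation:

$r = 0.81 \Rightarrow r^2 \approx 0.66$ and $r = 0.83 \Rightarrow r^2 \approx 0.69$,

that is, roughly two-thirds of the variance in each study.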

2.5. Higher Education

So far, we have reviewed findings in K-12 education. But what about higher education? At the individual level, Angoff and Johnson (1990) used a sample of 7954 students from 292 institutions who had taken the SAT and then about a half-decade later had taken the GRE. They used SAT math, college major, and gender and were able to predict 93% of the variance in GRE math scores. This means that roughly 7% of the variance in student achievement could be attributable to the institution the student attended. Additionally, Dale and Krueger (2002) examined the role of the selectivity of the institution in impacting long-run earnings using large samples and controlling for multiple confounders. Overall, once the SAT of the school was accounted for, there was no connection between the selectivity of the institution attended and long-run earnings overall. Taken along with arguments from other scholars that the value of higher education may not be so much about the institution one attends (e.g., Caplan 2018; Wolf 2003), this provides similar findings at the level of higher education as reviewed for K-12 education.
Value-added in higher education. As more attention has been drawn to accountability and transparency in higher education in the past decade, many researchers have turned to value-added methodology to estimate what higher education may add to economic opportunities (Roohr et al. 2021; Kulkarni and Rothwell 2015). However, there are certain challenges in measuring value-added in higher education, particularly using administrative data (Cunha and Miller 2014). Such challenges include the lack of year-on-year standardized tests, the lack of longitudinal student-level outcomes, concerns about self-selection into college and university, and the mismatch between students' specializations and outcome measures. Cunha and Miller (2014) proposed a simple model to estimate the value-added of individual institutions that includes pre-enrollment characteristics, unobserved differences in students' profiles and preferences captured by applications and acceptances, and fixed effects for the college they enrolled in. In our current study, unfortunately, we do not have access to student-level characteristics. Instead, we focus on institutional-level characteristics.

3. This Study

For this study, we link the higher education literature with the cognitive aptitude literature by examining the proportion of variance accounted for by students versus institutional characteristics at the level of colleges and universities in the U.S., at least to the extent that standardized test scores such as the SAT or ACT can be used to tap such student characteristics. Before describing our specific research design and questions in more detail, we explain our perspective on the measurement of student cognitive characteristics that helps unify and integrate the findings that have come from various disciplinary perspectives. The key is the measurement of student cognitive characteristics, in particular the measurement of general cognitive aptitude.

3.1. Measurement of Student Cognitive Characteristics

The measurement of student cognitive characteristics, in particular through tests or assessments aimed at measuring cognitive aptitudes and their use in the selection of various kinds, has a long history (Binet and Simon 1905; Spearman 1904; for reviews, see Detterman 2016; Thorndike and Lohman 1990). Even as early as 200 B.C., for example, the Chinese arguably selected for cognitive aptitude through the use of Civil Service Examinations, and even today, the gaokao, or national college entrance examination in China, is viewed as a measure of student cognitive aptitude (Li et al. 2012). Though there are multiple cognitive aptitudes, a general working consensus around the hierarchical model of cognitive aptitudes has emerged that recognizes general cognitive aptitude at the apex along with more narrow aptitudes below that (Carroll 1993).
There is also extensive research on the overlap between aptitude and achievement tests, and, in fact, Kelley (1927; c.f. Coleman and Cureton 1954, p. 347) introduced the idea of the jangle fallacy as “the use of two separate words or expressions covering, in fact, the same basic situation, but sounding different, as though they were in truth different,” referring to the significant measurement overlap between group cognitive aptitude tests and school achievement tests. Indeed, research has shown that cognitive g and academic achievement g are roughly the same from a measurement standpoint (Deary et al. 2007; Kaufman et al. 2012), that g is measured by nearly any challenging cognitive test with a diversity of tests and item types (e.g., Chabris 2007; Ree and Earles 1991), and that even when test designers intended to measure other aptitudes and achievements, g is uncovered (e.g., Johnson et al. 2004; Schult and Sparfeldt 2016; Steyvers and Schafer 2020). Given this broadly replicated finding, it should come as no surprise to those who acknowledge the body of research on cognitive aptitudes that both the SAT and ACT have largely been found to be measures of g (e.g., Frey and Detterman 2004; Koenig et al. 2008). We should make clear that we are discussing here a very specific, yet central, dimension of student characteristics, that such characteristics can encompass cognitive, noncognitive, and other attributes associated with the student (e.g., Wai and Lakin 2020), and that we view these attributes as developed. As Detterman (2016) puts it, student characteristics can be broadly characterized as things that go with the student when they leave a school, which include aspects associated with income and parental education level (Hair et al. 2015).

3.2. Analytic Plan

We build upon this body of work that spans decades and different disciplinary approaches by examining, at the college or university level in the U.S., the proportion of variance accounted for in various college rankings and early to mid-career salary by student characteristics as indicated by SAT or ACT scores, as well as various institutional factors. We draw from two longitudinal databases at two different points in time which measured these factors somewhat differently. The first database was drawn largely from the U.S. News & World Report, along with salary data collected by PayScale. Both sources date from 2014. The second database was drawn from College Scorecard data in 2017–2018 (U.S. Department of Education College Scorecard 2017–2018). Broadly, we seek to examine what proportion of variance student characteristics (as indicated by general cognitive aptitude) account for in typical college and university outcomes, such as rankings and salary, and also to estimate, after cognitive aptitude is taken into account, what proportion of variance in rankings and salary remain for institutional factors to account for among the explainable variance. We also take the flipside perspective and examine the role of what cognitive aptitude adds after accounting for a wide range of institutional factors. We use these two datasets along with Lykken’s (1968) approach of constructive replication—the idea of preserving focal constructs in each database but varying construct-irrelevant aspects—to investigate whether findings replicate across the two datasets, and also across the decades of literature reviewed at multiple levels of education.

4. Method

4.1. Data and Analytic Sample

We use two datasets for this study, from different time points and with different measurements of outcomes, to see whether the findings replicate. The first dataset was compiled in 2014 from the U.S. News website using a premium account for full access, as well as public data from PayScale. The second dataset was drawn from the College Scorecard database from 2017–2018; it is free to access and download via https://collegescorecard.ed.gov/data/ (accessed on 23 March 2022). Table 1 shows each of the comparable variables used in this study, which were purposefully selected to represent student (i.e., SAT or ACT scores) and various institutional factors; we discuss how these were selected for inclusion in the next section. After matching all observations by university names, we had a total of 1271 universities and colleges in the College Scorecard dataset in 2017–2018, and 883 universities and colleges in the U.S. News dataset.

4.2. Variables

Student characteristics. We used average SAT and ACT scores at the institutional level as a proxy for students' average general cognitive aptitude level (e.g., Frey and Detterman 2004; Koenig et al. 2008; see Table 1 for a description of variables). As in prior work (e.g., Wai et al. 2018), for the U.S. News reported scores this average was computed by translating ACT scores to SAT scores using a conversion table and then taking an average of the 25th and 75th percentile scores (what universities report to U.S. News) to create an SAT average for all schools with data. For the College Scorecard database, we used the SAT average already computed in the database.
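To make the score-averaging step concrete, here is a minimal sketch in Python. The concordance values in ACT_TO_SAT and the column names (sat_25, sat_75, act_25, act_75) are illustrative assumptions, not the published conversion table used in the prior work.

```python
import pandas as pd

# Illustrative ACT-to-SAT concordance values (assumed for this sketch);
# a published conversion table would be substituted in practice.
ACT_TO_SAT = {22: 1030, 25: 1130, 28: 1250, 31: 1360, 34: 1500}

def school_sat_average(row):
    """Average a school's reported 25th and 75th percentile scores,
    converting ACT percentiles to the SAT scale when SAT is missing."""
    if pd.notna(row["sat_25"]) and pd.notna(row["sat_75"]):
        return (row["sat_25"] + row["sat_75"]) / 2
    if pd.notna(row["act_25"]) and pd.notna(row["act_75"]):
        return (ACT_TO_SAT[row["act_25"]] + ACT_TO_SAT[row["act_75"]]) / 2
    return float("nan")

schools = pd.DataFrame({
    "sat_25": [1150, None], "sat_75": [1380, None],
    "act_25": [None, 25],   "act_75": [None, 31],
})
schools["sat_avg"] = schools.apply(school_sat_average, axis=1)
print(schools["sat_avg"])  # 1265.0 for the SAT school, 1245.0 for the ACT school
```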
Outcomes. We used average income/salary at early and mid-career points at the institutional level as a proxy for short-term and long-term outcomes of students, as well as university rankings on various measures (see Table 1). College and university rankings are conducted by numerous publications seeking to quantify differences in quality between schools in diverse ways. We drew from rankings data in prior work (Wai et al. 2018) looking at the U.S. News national university and U.S. News liberal arts college rankings, a critical thinking ranking (using a measure of critical thinking known as the CLA+), a Lumosity brain games ranking which included data from colleges and universities whose students had played Lumosity's brain training games, the Times Higher Education (THE) world and U.S. rankings, and a revealed preference ranking (Avery et al. 2013, p. 425), which ranked schools based on "the colleges students prefer when they can choose among them". Income/salary is a clear and objective occupational outcome metric which is often used in evaluating the role of higher education and has also been linked to cognitive aptitudes (Brown et al. 2021; Judge et al. 2010; Schmidt and Hunter 2004). In this study, only the THE U.S. ranking and the Lumosity ranking had sufficient sample sizes (N ≈ 200) for the analyses in which both institutional factors and student cognitive characteristics were examined.
Institutional characteristics. Our institutional-level variables included data on tuition and fees, admission rate, university resources, cost of attending (including room and board), and diversity (see Table 1). The role of tuition and fees and the overall cost of attending university may matter for students in terms of the time they spend studying versus the time they must work in addition to studying. For example, some studies suggest that students who attend colleges with higher tuition are more likely to work while studying (Neill 2015). However, whereas Light (2001) suggested that working while studying yielded higher future wages, Stinebrickner and Stinebrickner (2007) noted that additional study time was associated with higher academic performance. Given that work-study and additional study time may come into conflict with one another, it is unclear how tuition and the cost of attending may affect student outcomes, with significant confounding factors coming from students' choice of tracks to complete their degrees (Neyt et al. 2018), in addition to student cognitive aptitudes, which predict numerous long-term outcomes throughout life (e.g., Brown et al. 2021; Deary et al. 2007; Schmidt and Hunter 2004).
School facilities and intellectual resources, as well as quality, are proxied by endowment, number of faculty, faculty–student ratio, enrollment, and admission rate, though it is unclear whether these resources are crucial for student achievement after graduation (Caplan 2018; Dale and Krueger 2002; Wolf 2003). Some studies suggest that educational expenditure and university resources are modestly related to student learning outcomes for certain groups of students, for example freshmen (Pike et al. 2011; Winitzky-Stephens and Pickavance 2017). Instructor quality might also contribute to student outcomes. Cash et al. (2017) studied the relationship between perceptions and resources of large universities using a multidimensional approach to survey students and instructors, and found that instructors were the key determinant of students' outcomes. In particular, the researchers argued that, to make a large class feel small and thereby promote student achievement, effort should be placed on instructor quality and on course structure as determined by instructors (Cash et al. 2017). Other university resources, including access to libraries and electronic databases—which correlate with university financial resources—have also been found to have a positive correlation with student performance (Montenegro et al. 2016).
Researchers have also studied the relationship between classroom diversity, as well as diversity courses, and students' cognitive outcomes (Roksa et al. 2017; Gottfredson et al. 2008; Bowman 2013). Roksa et al. (2017), leveraging a longitudinal study following three cohorts of students from their first to their last year in college, found that diversity experiences were correlated with student cognitive outcomes, with the correlation being stronger for white students than for non-white students. Gottfredson et al. (2008) studied 6800 incoming law students in a nationally representative sample and found that classroom diversity had a moderate positive effect on students' "openness and enthusiasm to learn new ideas and perspectives" (p. 85). Bowman (2013) studied a longitudinal sample of 8615 first-year undergraduates at 49 universities and found that frequent diversity interactions were associated with gains in students' outcomes, including leadership skills, psychological well-being, intellectual engagement, and intercultural effectiveness. However, Martins and Walker (2006) found that students' unobservable characteristics moderated student achievement significantly even when controlling for attendance, class size, peers, and teachers. Given the interest in diversity demonstrated in the literature, in this study we used a college diversity index as a proxy for college diversity. This index, on a scale from 0 to 100, was obtained from the Chronicle of Higher Education database (The Chronicle of Higher Education forthcoming) through a membership subscription (https://www.chronicle.com/package/diversity/ accessed on 23 March 2022).
In the College Scorecard data, we had more than 6000 observations. However, there were also many missing observations in this dataset; for example, among the more than 6000 institutions, only 1300 reported average SAT scores at the institutional level. Some patterns we observed in this dataset are: (1) the average SAT score is 1060; (2) there is a wide range of admission rates, total enrollment, faculty salary, and cost of attendance; and (3) the majority of institutions are private for-profit. For the U.S. News dataset, missing data were a less significant issue, and the majority of institutions are private not-for-profit. More details can be found in Appendix A Table A1 and Table A2.

4.3. Statistical Methods

We used ordinary least squares (OLS) regression to analyze the relationship between student aptitude, institutional factors, and student outcomes. First, we ran a model with only SAT scores predicting student outcomes, to uncover the variance explained by student characteristics or cognitive aptitude alone (Table A1, Table A2 and Table A3 in Appendix A include the full set of outcomes and results based on the broader sample of colleges and universities not restricted by institutional factor availability). Second, we used only institutional variables, namely the cost of attending, university type (private, public, and for-profit), locale (urban, suburban, rural, and city), and region (seven designated regions), to obtain the percent of variance explained by institutional characteristics. Third, we included both SAT and institutional factors in our final model. We added controls for university types, locales, and regions to account for plausible differences between types of universities in terms of their internal policies, as well as regional and locale differences that may contribute to variations in institutional outcomes. Our models are as follows:
Model 1: $\mathrm{outcome}_i = \beta_0 + \beta_1 \mathrm{SAT}_i + \varepsilon_i$, where $\mathrm{outcome}_i$ is the respective outcome for school $i$ and $\mathrm{SAT}_i$ is the average SAT score for that school.
Model 2: $\mathrm{outcome}_i = \beta_0 + \beta_1 I_i + \Omega_i + \pi_i + \varepsilon_i$, where $I_i$ is a matrix of institutional-level variables, as mentioned, $\Omega_i$ is university-type fixed effects, and $\pi_i$ is location fixed effects.
Model 3: $\mathrm{outcome}_i = \beta_0 + \beta_1 \mathrm{SAT}_i + \beta_2 I_i + \Omega_i + \pi_i + \varepsilon_i$ is the combined model from (1) and (2), in which we study the jointly explained variance by including both the SAT score and the institutional variables. Errors are clustered at the state level.
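As a concrete illustration of how these three models might be fit, below is a minimal sketch using Python and statsmodels. The file name and all column names (salary_10yr, sat_avg, cost, admission_rate, retention_rate, faculty_salary, pell_share, control, locale, region, state) are hypothetical stand-ins for the actual fields of the merged datasets.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("merged_institutions.csv")  # hypothetical merged file
inst = "cost + admission_rate + retention_rate + faculty_salary + pell_share"
cols = ["salary_10yr", "sat_avg", "cost", "admission_rate", "retention_rate",
        "faculty_salary", "pell_share", "control", "locale", "region", "state"]
df = df.dropna(subset=cols)  # keep rows usable by all three models
cluster = {"groups": df["state"]}  # standard errors clustered at the state level

# Model 1: average SAT score alone.
m1 = smf.ols("salary_10yr ~ sat_avg", data=df).fit(
    cov_type="cluster", cov_kwds=cluster)

# Model 2: institutional variables (I_i) plus university-type (Omega_i)
# and location (pi_i) fixed effects.
m2 = smf.ols(f"salary_10yr ~ {inst} + C(control) + C(locale) + C(region)",
             data=df).fit(cov_type="cluster", cov_kwds=cluster)

# Model 3: both the SAT average and the institutional variables.
m3 = smf.ols(f"salary_10yr ~ sat_avg + {inst} + C(control) + C(locale) + C(region)",
             data=df).fit(cov_type="cluster", cov_kwds=cluster)
```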
Finally, to study the question of what explains the institutional-level outcomes, including the rankings and the average salary of graduates at early and mid-career points, we calculated two ratios of robust R-squared values. The ratio $R^2_{\mathrm{model}\,1} / R^2_{\mathrm{model}\,3}$ indicates how much of the jointly explained outcome variance is attributable to average SAT scores, and the ratio $R^2_{\mathrm{model}\,2} / R^2_{\mathrm{model}\,3}$ indicates how much is attributable to institutional factors. We made sure that we used a sufficient sample size (N ≈ 200) for the three models that included SAT average scores and institutional characteristic variables. We also dropped certain outliers in faculty average monthly salary in the College Scorecard data, identified by examining the variable's distribution and summary statistics: observations beyond the lower and upper bounds (median +/− 1.5 times the inter-quartile range) were dropped. Finally, we dropped data points with zero values in retention rates and admission rates in the two datasets.
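Continuing the sketch above (same hypothetical column names), the two ratios and the trimming rules could be computed as follows:

```python
# Share of the jointly explained variance attributable to SAT scores alone
# versus institutional factors alone.
sat_share = m1.rsquared / m3.rsquared
inst_share = m2.rsquared / m3.rsquared

# Outlier rule from the text: drop observations outside
# median +/- 1.5 * inter-quartile range of faculty salary.
q1, med, q3 = df["faculty_salary"].quantile([0.25, 0.50, 0.75])
iqr = q3 - q1
df = df[df["faculty_salary"].between(med - 1.5 * iqr, med + 1.5 * iqr)]

# Drop data points with zero retention or admission rates.
df = df[(df["retention_rate"] > 0) & (df["admission_rate"] > 0)]
```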
In addition, we included a dominance analysis (DA) to determine the importance of the independent variables (for further details, see Grömping 2007; Luchman 2015, 2021). This additional analysis provides a picture of which factors contribute the most to our model fit statistic. In particular, DA provides a "theory-grounded method for ascribing components of a fit metric to multiple, correlated independent variables" (Luchman 2015, p. 10).
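Dominance analysis averages each predictor's incremental contribution to R-squared over all subsets of the other predictors. Below is a compact sketch of the general dominance statistic under the same hypothetical column names; for real analyses, a dedicated implementation such as those discussed by Grömping (2007) or Luchman (2015, 2021) would be the practical choice.

```python
from itertools import combinations
import statsmodels.formula.api as smf

# df as prepared in the sketches above.

def fit_r2(df, outcome, predictors):
    """R-squared of an OLS fit on the given predictors (0 for the null model)."""
    if not predictors:
        return 0.0
    return smf.ols(f"{outcome} ~ {' + '.join(predictors)}", data=df).fit().rsquared

def general_dominance(df, outcome, predictors):
    """For each predictor, average its incremental R^2 within each subset
    size, then average across sizes (the general dominance statistic).
    The contributions sum to the full model's R^2. Cost grows as 2^p,
    so this is practical only for a modest number of predictors."""
    stats = {}
    for p in predictors:
        others = [q for q in predictors if q != p]
        level_means = []
        for k in range(len(others) + 1):
            incs = [fit_r2(df, outcome, list(s) + [p]) - fit_r2(df, outcome, list(s))
                    for s in combinations(others, k)]
            level_means.append(sum(incs) / len(incs))
        stats[p] = sum(level_means) / len(level_means)
    return stats

weights = general_dominance(df, "salary_10yr",
                            ["sat_avg", "retention_rate", "pell_share"])
```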

5. Results

Table 2 and Table 3 present coefficients, standard errors, and R-squared values from the OLS regressions. In models 1 and 3 in Table 2, where the SAT average score at the institutional level was included, the estimated coefficients were statistically significant, indicating that the SAT average score was a statistically significant predictor of students' short-term and long-term outcomes as measured by salary. This result was replicated across the two datasets. Similarly, when looking at institutional rankings as reported in Table 3, SAT average scores were also a statistically significant predictor of college ranking: the higher the average SAT score, the better the institution's position in both the THE U.S. ranking and the Lumosity ranking (note that ranking scales are reverse-ordered, so better ranks are smaller numbers).
Table 4 summarizes the proportion of variance explained in each respective outcome by average SAT scores, even when accounting for institutional characteristics. Panel A presents data from the College Scorecard; Panel B presents data from U.S. News. In Panel A, we observe that by using the average SAT score only, we were able to account for 42% of the variation in the average salary six years out and 47% of the variation in the average salary ten years out. The explained variation in salary was smaller in Panel B: using average SAT scores, we were able to explain 30% of the variance in early-career salary and 41% of the variance in mid-career salary.
In both datasets, average SAT scores accounted for more variance in the institutions’ rankings than students’ outcomes. In Panel A, 53% of the variance in the institutions’ THE U.S. ranking and 51% of the variance in the Lumosity ranking were accounted for by the change in average SAT scores. Similarly, in Panel B, 56% of the variance in the THE U.S. ranking and 56% of the variance in the Lumosity ranking in the second dataset can be accounted for by the change in average SAT scores.
In Table 4 Model 2 R2, we included only the selected institutional factors as predictors of students' outcomes and institutional rankings. When comparing the values of R2 in Model 1 and Model 2, except for the explained variation in the Lumosity ranking using the U.S. News dataset, institutional factors accounted for more variation in student outcomes and the THE U.S. ranking than the average SAT scores. In particular, looking at the R2 values for Model 2, in the College Scorecard dataset, institutional factors explained 57% of the variance in short-term salary (six years out) and 64% of the variance in long-term salary (ten years out). In the 2014 U.S. News data, institutional factors accounted for 38% of the variance in average early-career salary and 53% of the variance in average mid-career salary at the institutional level. In terms of rankings, institutional factors accounted for 75% of the variance in the THE U.S. ranking in both datasets, and for 59% (College Scorecard) and 52% (U.S. News) of the variance in the Lumosity ranking.
When including both average SAT scores and institutional factors in the model, we observed increases in the explained variance of our outcome measures. We also examined the proportion of variance explained by comparing the two R-squared ratios, $R^2_{\mathrm{model}\,1} / R^2_{\mathrm{model}\,3}$ and $R^2_{\mathrm{model}\,2} / R^2_{\mathrm{model}\,3}$. We found that, across the two datasets, institutional factors (taken collectively) appear to explain a greater amount of the variation in students' average salary at early- and mid-career points and in the institutions' THE U.S. ranking. However, it is worth noting that the average SAT alone already explained a large portion of the variation in students' outcomes and institutions' rankings compared to the other institutional factors.
Finally, we report findings using the College Scorecard dataset in Table 5 and U.S. News data in Table 6. In our multiple regression model predicting salary outcomes using the College Scorecard data, the top three predictors for six-year salary were: % of students who received a Pell grant, average SAT score, and retention rate (see Table 5 Panel A). For ten-year salary, the top predictors were retention rate, average SAT score, and % of students receiving a Pell grant (see Table 5 Panel B). In the U.S. News data, the top three predictors for early-career salary were average SAT scores, average freshmen retention rate, and endowment (see Table 6 Panel A). For mid-career salary, the top predictors were average SAT score, average freshmen retention rate, and room and board cost (see Table 6 Panel B).
Looking at rankings, specifically the THE U.S. ranking and Lumosity ranking, we found that the top three predictors of THE U.S. ranking using the College Scorecard data were completion rate, retention rate, and faculty salary (see Table 5 Panel C). For the Lumosity ranking the top three predictors were average SAT score, % of students who received a Pell grant, and retention rate (see Table 5 Panel D). Using the U.S. News data (see Table 6), we found the top three predictors for the THE U.S. ranking were retention rate, average SAT score, and total enrollment (see Panel C); the top predictors for the Lumosity ranking were average SAT score, retention rate, and endowment (see Panel D). Average SAT score and retention rate were the most significant predictors of both student long-term outcomes and institutional rankings across the two datasets.

6. Discussion

Overall, our findings aligned historically with much of the research on cognitive aptitudes and the variance explained in outcomes, even after accounting for various institutional factors. However, this was from the perspective of cognitive aptitudes being the core variable of importance to consider as a starting point. On the flipside, when entering the multitude of institutional factors first into the regression model, these numerous variables collectively accounted for the majority of the variance in outcomes (in most cases larger than the proportion of variance accounted for by test scores alone), suggesting that institutional factors very likely do matter, in addition to student characteristics and cognitive aspects. Of course, test scores such as the SAT are just one short measure that students take prior to college, so the fact that much of the variance in outcomes is captured by this singular measure should not be overlooked. At the same time, this analysis illustrates that other institutional factors can matter collectively, and/or that the contribution of student characteristics might be obscured or highlighted depending upon which variables one prioritizes in the research design and analysis. Depending upon the ways variables are prioritized and entered into regression models, findings can be quite different.
In the remaining part of this discussion, though we fully acknowledge that institutional factors play an important role in addition to student characteristics, we relate our findings to the historical focus of the field that studies cognitive aptitudes, and consider them in that broader context and through the lens of the usefulness of cognitive aptitudes.

6.1. Limitations

A core limitation of this study is that our research design addresses a reverse causal question and therefore cannot isolate causes; thus, we likely have omitted-variable bias. However, because one purpose of the study was to determine whether the proportions of variance in student achievement due to students or to institutional factors aligned with the large historical literature going back to Coleman et al. (1966; see Detterman 2016, for a review) at the level of colleges and universities, our approach is appropriate for testing whether these findings replicate in contemporary U.S. samples. Another possible limitation is that our findings are at the group rather than individual level and could potentially reflect the ecological fallacy (e.g., Piantadosi et al. 1988); however, Angoff and Johnson (1990) reported similar findings at the individual level. Another limitation is that the outcomes we examined were restricted to various school rankings and to salary, which are only a limited set of educational and occupational outcomes. University rankings are an imperfect outcome, given that the apportioning of weights to various aspects is quite variable and reflects the policy decisions of the ranker. However, our findings were replicated across many different types of rankings, which reflect numerous weighting formulas (especially see Appendix A Table A1 through Table A3). Additionally, salary is often a core outcome used in evaluating colleges and universities (e.g., Dale and Krueger 2002), and thus the outcomes used are appropriate, though limited to what we were able to access in the datasets used. Relatedly, we also have the issue of missing data. Our data were collected from multiple sources that may not adequately synchronize with one another. Therefore, even though at some point we had more than 6000 observations, after running multiple regressions we were down to 200–300 observations. Our findings, therefore, are not necessarily representative of the broader population of institutions.

6.2. Findings Replicate and Extend Those in K-12 to Higher Education and Also Historically

Despite these important limitations, our findings at the level of colleges and universities represent contemporary replications, across two U.S. datasets at different time points, of the many studies reviewed in K-12 education, and also of the historical record. Overall, the proportion of variance accounted for by student characteristics as indicated by average SAT/ACT scores or general cognitive aptitude, even after accounting for various institutional factors, was quite consistent across not only typical college rankings but also a critical thinking ranking and a Lumosity brain games ranking (see Table A1, Table A2 and Table A3 in Appendix A for the full range of analyses of rankings, excluding institutional factor controls). This suggests that even measures intended to assess supposedly unique constructs such as critical thinking (e.g., Butler et al. 2017) may in fact end up largely overlapping with general reasoning. Additionally, brain games such as those from Lumosity, which were intended to improve cognitive aptitudes, may end up largely measuring a latent learning g factor (e.g., Steyvers and Schafer 2020), which aligns with other research showing that even video games may be measuring cognitive aptitudes (e.g., Quiroga et al. 2015, 2019). That various rankings, such as U.S. News, only lightly weight SAT/ACT scores in their ranking formulas, and yet such scores account for the majority of the variance in those rankings, suggests that much of university quality may actually be due to student quality at the point of selection (e.g., Dale and Krueger 2002; Wai et al. 2018). Of course, this does not rule out various dimensions of university education or impact, such as the brand of a degree helping to improve employment prospects, among other factors, but it does provide bounds around thinking about the contributions of developed cognitive aptitudes at the point of testing, and of institutional or other factors, to long-run outcomes.
The proportion of variance in long-run salary accounted for by SAT/ACT scores or general cognitive aptitude was replicated across the U.S. News and College Scorecard datasets, which used two different measurements of salary. Overall, College Scorecard data showed that approximately 47% of the variance in salary a decade after graduation was accounted for by such test scores, and U.S. News and PayScale data showed that approximately 41% of the variance in salary at mid-career was accounted for by test scores. Findings for salary were consistently replicated across different career time points and datasets, with the models that also included institutional factors explaining from 72% up through 74% of the variance.

6.3. Part of Student Outcomes May Be Due to Selection, but Teachers and Institutions Still Matter

In a classic paper, Dale and Krueger (2002) showed that once SAT scores were accounted for, there were no differences in long-run salary for students attending a highly selective school compared to those who attended a less selective school. This indicated the importance of selection on student characteristics, especially cognitive aptitudes (see also Angoff and Johnson 1990). Overall, the findings from this study align with the Dale and Krueger (2002) findings, suggesting the importance of cognitive aptitudes before college in predicting outcomes well after college (e.g., Lubinski and Benbow 2020). This also aligns with other literature on selective high schools showing that student selection effects, perhaps more than school quality, may be driving differences in outcomes (e.g., Abdulkadiroglu et al. 2014; Dobbie and Fryer 2014; Dynarski 2018), as well as with scholars who have argued that much of the impact of college or university may be attributable to selection (Caplan 2018; Wolf 2003). It appears that cognitive aptitudes remain an important source of selection bias in forward causal inference approaches, and a more careful consideration of how cognitive aptitudes matter across the lifespan in relation to educational interventions and other policies is in order.
Teachers and other institutional factors do matter (as we illustrated by entering institutional factors first rather than cognitive aptitudes in our models). However, at least from the broad empirical historical perspective of cognitive aptitudes research, the extent to which institutional and educational factors can matter is bounded in some ways by the broader pattern of student characteristics accounting for a large portion of the variance in long-run student outcomes. For example, Chetty et al. (2011, 2014) illustrated that teacher effects can have causal impacts on long-run earnings, and rigorous work on the differential effects of the types of schooling environments shows that institutional effects matter (e.g., Atteberry and McEachin 2020; Chetty et al. 2014; Dynarski et al. 2013; Wolf 2019), which also aligns with our finding when entering institutional factors prior to cognitive aptitude tests. Additionally, a great deal of literature supports the idea that parents' education level, earnings, and social capital are important to the development of eventual student success (e.g., for a summary see Egalite 2016; Hair et al. 2015; Heckman 2000). The wide range of variables we examined in this study may be picking up some of these factors by proxy. And even though the diversity index, as part of the institutional factors controls in this study, did not appear to be a major factor in student outcomes, there may be other values to diversity that are not necessarily quantifiable or achievement-outcome-related, such as simply being exposed to a wide range of people from a unique range of backgrounds and circumstances. More broadly, the resources that an institution holds—such as access to top professors, other highly talented students, opportunities for research, prestige of brand, or alumni networks—can vary widely alongside student cognitive quality, which may serve to further amplify the outcomes of graduates. This may be in part why, in the U.S., roughly half of numerous leaders in society have graduated from just a handful of elite institutions and likely, by proxy, have high developed cognitive aptitudes (e.g., Wai 2013; Wai et al. 2018; Wai and Perina 2018).

6.4. Conclusions and Future Directions

Viewed through the lens of cognitive aptitudes being important, this paper replicated and extended findings in two contemporary U.S. datasets at the level of universities, extending decades of research at many levels of education and suggesting that a large portion of the variance in student outcomes may be due to student characteristics, in particular developed cognitive aptitude. When coupled with the large literature showing that general reasoning is related to numerous outcomes across the lifespan (Brown et al. 2021; Deary et al. 2007; Kuncel et al. 2004; Schmidt and Hunter 2004), these findings suggest that across at least the last half century the contribution of students to long-run student achievement has been underappreciated in U.S. education (Detterman 2016; Maranto and Wai 2020), an omitted set of variables in education research (Schmidt 2017). This may also highlight the neglect by U.S. education research and policymakers of general cognitive aptitudes and individual differences in students across a more comprehensive range of well-studied individual differences characteristics (Lubinski 2020; Revelle et al. 2011). Various cognitive and noncognitive aptitudes might be fruitfully developed by education, but should also be accounted for when helping students receive a differentiated education in schools throughout their developmental trajectory (e.g., Lakin and Wai 2022).
Some fruitful avenues for taking individual differences in aptitude into account for more optimal talent or human capital development might be to examine more carefully which aspects of education could improve intelligence (Ceci 1991; Snow 1996; Ritchie and Tucker-Drob 2018), which educational-intervention effects persist or fade out when accounting for intelligence (e.g., Bailey et al. 2020), and how instruction can be differentiated to more closely match the individual differences and characteristics of students (e.g., Lakin and Wai 2020). More broadly, this research highlights the need for the approach of asking reverse causal questions to be integrated with the approach focused on forward causal inference (Wai and Bailey 2021), for education economists and policy researchers to pay more attention to the established structure of cognitive aptitudes as a source of selection bias when using forward causal inference tools (Schlotter et al. 2011), and for an appreciation of a broader methodological approach and of the integration of research evidence that is often found across disciplinary boundaries (e.g., Singer 2019). Ultimately, whether one thinks student characteristics or institutional characteristics matter more is highly dependent upon the research lens and historical evidence one brings to the table in one's sample, research design, and analytical approach.

Author Contributions

Conceptualization, J.W. and B.T.; writing—original draft preparation, J.W. and B.T.; data analysis and presentation, B.T. and J.W.; writing—review and editing, J.W. and B.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data were largely drawn from publicly available sources.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Summary statistics from College Scorecard data 2017–2018.

Variable | N | Mean | SD | Min | Max
SAT average | 1300 | 1060.170 | 136.849 | 712 | 1555
Admission rate | 1996 | 0.672 | 0.205 | 0.042 | 1
Total undergraduate enrollment | 6340 | 2450.591 | 5509.977 | 1 | 100,011
Faculty salary/month | 4190 | 6495.023 | 2426.603 | 216 | 27,570
Retention at 4-year institution | 2109 | 0.729 | 0.158 | 0.062 | 1
% Students ever borrowed loan | 5382 | 0.775 | 0.222 | 0.010 | 0.986
% Students received Pell grant | 3240 | 0.496 | 0.222 | 0 | 1
Cost per academic year | 3590 | 25,851.390 | 14,439.030 | 4259 | 85,308
Diversity index | 707 | 49.335 | 17.033 | 0 | 89.2
Location | 6627 | 1.814 | 0.947 | 1 | 4
   (1 = city, n = 3180; 2 = suburb, n = 2025; 3 = town, n = 897; 4 = rural, n = 525)
Control | 7068 | 2.139 | 0.838 | 1 | 3
   (1 = public, n = 2056; 2 = private non-profit, n = 1970; 3 = private for-profit, n = 3042)
Salary 6 years after graduation | 5450 | 31,383.650 | 11,626.790 | 11,800 | 151,500
Salary 10 years after graduation | 5259 | 39,399.140 | 17,901.610 | 14,600 | 250,000
THE U.S. ranking | 983 | 470.236 | 250.563 | 1 | 800
Lumosity ranking | 443 | 226.072 | 131.155 | 1 | 456
Table A2. Summary statistics from U.S. News data.

Variable | N | Mean | SD | Min | Max
SAT average | 1301 | 1056.328 | 121.778 | 660 | 1395
Fall 2013 acceptance rate | 1293 | 0.655 | 0.171 | 0.077 | 1
Total enrollment | 1200 | 7856.723 | 9908.471 | 161 | 77,329
Endowment | 1073 | 2.01 × 10^8 | 5.64 × 10^8 | 65,712 | 8.27 × 10^9
Average freshman retention rate | 1290 | 0.751 | 0.108 | 0.420 | 0.970
Room and board | 1210 | 9822.236 | 2383.622 | 1032 | 23,386
% Graduating students ever borrowed loan | 962 | 0.691 | 0.149 | 0.083 | 0.990
Average student debt | 947 | 28,119.670 | 6373.642 | 5500 | 50,275
Diversity index | 462 | 49.138 | 16.494 | 9.3 | 89.2
Location | 1106 | 2.522 | 1.000 | 1 | 4
   (1 = city, n = 219; 2 = suburb, n = 287; 3 = town, n = 404; 4 = rural, n = 196)
Control | 1226 | 1.782 | 0.975 | 1 | 3
   (1 = public, n = 478; 2 = private non-profit, n = 745; 3 = private for-profit, n = 3)
Early-career salary | 851 | 44,622.210 | 5735.902 | 30,800 | 68,600
Mid-career salary | 851 | 76,564.390 | 13,439.65 | 41,000 | 131,800
THE U.S. ranking | 975 | 484.673 | 239.351 | 15 | 800
Lumosity ranking | 422 | 238.768 | 125.014 | 15 | 456
Table A3. SAT predicting college and university rankings using Model 1 with unrestricted samples.

Panel A: College Scorecard Data 2016–2017
Ranking | Coefficient for SAT Score | Robust Standard Error | N | R2
U.S. News national ranking | −0.384 *** | 0.011 | 187 | 0.798
U.S. News Liberal Arts ranking | −0.367 *** | 0.014 | 122 | 0.825
Revealed preferences ranking | −0.484 * | 0.203 | 92 | 0.156
THE World ranking | −0.981 *** | 0.065 | 149 | 0.450
Critical thinking ranking | 0.592 *** | 0.043 | 42 | 0.684

Panel B: U.S. News Data 2014
Ranking | Coefficient for SAT Score | Robust Standard Error | N | R2
U.S. News national ranking | −0.453 *** | 0.016 | 180 | 0.739
U.S. News Liberal Arts ranking | −0.398 *** | 0.020 | 166 | 0.741
Revealed preferences ranking | −0.187 *** | 0.025 | 76 | 0.268
THE World ranking | −0.996 *** | 0.126 | 136 | 0.284
Critical thinking ranking | 0.610 *** | 0.034 | 68 | 0.716

*** p < 0.001, * p < 0.05.

References

  1. Abdulkadiroglu, Atila, Joshua Angrist, and Parag Pathak. 2014. The elite illusion: Achievement effects at Boston and New York exam schools. Econometrica 82: 137–96. [Google Scholar] [CrossRef] [Green Version]
  2. Angoff, William H., and Eugene G. Johnson. 1990. The differential impact of curriculum on aptitude test scores. Journal of Educational Measurement 27: 291–305. [Google Scholar] [CrossRef]
  3. Asbury, Kathryn, and Jonathan Wai. 2020. Viewing education policy through a genetic lens. Journal of School Choice 14: 301–15. [Google Scholar] [CrossRef]
  4. Atteberry, Allison C., and Andrew J. McEachin. 2020. Not where you start, but how much you grow: An addendum to the Coleman Report. Educational Researcher 49: 678–85. [Google Scholar] [CrossRef]
  5. Avery, Christopher N., Mark E. Glickman, Caroline M. Hoxby, and Andrew Metrick. 2013. A revealed preference ranking of U.S. colleges and universities. The Quarterly Journal of Economics 128: 425–67. [Google Scholar] [CrossRef]
  6. Bailey, Drew H., Greg J. Duncan, Flavio Cunha, Barbara R. Foorman, and David S. Yeager. 2020. Persistence and fadeout of educational intervention effects: Mechanisms and potential solutions. Psychological Science in the Public Interest 21: 55–97. [Google Scholar] [CrossRef] [PubMed]
  7. Baumert, Jürgen, Oliver Lüdtke, Ulrich Trautwein, and Martin Brunner. 2009. Large-scale student assessment studies measure the results of processes of knowledge acquisition: Evidence in support of a distinction between intelligence and student achievement. Educational Research Review 4: 165–76. [Google Scholar] [CrossRef]
  8. Binet, Alfred, and Theodore Simon. 1905. Méthode nouvelle pour le diagnostic du niveau intellectuel des anormaux. L’Année Psychologique 11: 191–244. [Google Scholar] [CrossRef]
  9. Bowman, Nicholas A. 2013. How much diversity is enough? the curvilinear relationship between college diversity interactions and first-year student outcomes. Research in Higher Education 54: 874–94. [Google Scholar] [CrossRef]
  10. Brown, Matt I., Jonathan Wai, and Christopher F. Chabris. 2021. Can you ever be too smart for your own good? Comparing linear and nonlinear effects of cognitive ability on life outcomes. Perspectives on Psychological Science 16: 1337–59. [Google Scholar] [CrossRef]
  11. Butler, Heather A., Christopher Pentoney, and Mabelle P. Bong. 2017. Predicting real-world outcomes: Critical thinking ability is a better predictor of life decisions than intelligence. Thinking Skills & Creativity 25: 38–46. [Google Scholar] [CrossRef]
  12. Byrne, Brian, William L. Coventry, Richard K. Olson, Sally J. Wadsworth, Stefan Samuelsson, Stephen A. Petrill, Erik G. Willcutt, and Robin Corley. 2010. “Teacher effects” in early literacy development: Evidence from a study of twins. Journal of Educational Psychology 102: 32–42. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Caplan, Bryan D. 2018. The Case against Education: Why the Education System Is a Waste of Time and Money. Princeton: Princeton University Press. [Google Scholar]
  14. Carroll, John B. 1993. Human Cognitive Abilities: A Survey of Factor Analytic Studies. Cambridge: Cambridge University Press. [Google Scholar]
  15. Cash, Ceilidh B., Jessa Letargo, Steffen P. Graether, and Shoshanah R. Jacobs. 2017. An analysis of the perceptions and resources of large university classes. CBE Life Sciences Education 16: ar33. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Ceci, Stephen J. 1991. How much does schooling influence general intelligence and its cognitive components? A reassessment of the evidence. Developmental Psychology 27: 703–22. [Google Scholar] [CrossRef]
  17. Chabris, Christopher F. 2007. Cognitive and neurobiological mechanisms of the law of general intelligence. In Integrating the Mind: Domain General versus Domain Specific Processes in Higher Cognition. Edited by Maxwell J. Roberts. New York: Psychology Press, pp. 449–91. [Google Scholar]
  18. Chetty, Raj, John N. Friedman, and Jonah E. Rockoff. 2014. Measuring the impacts of teachers II: Teacher value-added and student outcomes in adulthood. American Economic Review 104: 2633–79. [Google Scholar] [CrossRef] [Green Version]
  19. Chetty, Raj, John N. Friedman, Nathaniel Hilger, Emmanuel Saez, Diane Whitmore Schanzenbach, and Danny Yagan. 2011. How does your kindergarten classroom affect your earnings? Evidence from Project STAR. The Quarterly Journal of Economics 126: 1593–660. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Chingos, Matthew M., Katharine M. Lindquist, and Grover J. "Russ" Whitehurst. 2014. School Superintendents: Vital or Irrelevant? Washington, DC: Brookings Institution. Available online: https://www.brookings.edu/research/school-superintendents-vital-or-irrelevant/ (accessed on 23 March 2022).
  21. Coleman, James S., Ernest Q. Campbell, Carol J. Hobson, James McPartland, Alexander M. Mood, Frederic D. Weinfeld, and Robert L. York. 1966. Equality of Educational Opportunity. Washington, DC: Government Printing Office. [Google Scholar]
  22. Coleman, William, and Edward E. Cureton. 1954. Intelligence and achievement: The “Jangle Fallacy” again. Educational and Psychological Measurement 14: 347–51. [Google Scholar] [CrossRef]
  23. Cunha, Jesse M., and Trey Miller. 2014. Measuring value-added in higher education: Possibilities and limitations in the use of administrative data. Economics of Education Review 42: 64–77. [Google Scholar] [CrossRef] [Green Version]
  24. Dale, Stacy Berg, and Alan B. Krueger. 2002. Estimating the payoff to attending a more selective college: An application of selection on observables and unobservables. The Quarterly Journal of Economics 117: 1491–527. [Google Scholar] [CrossRef] [Green Version]
  25. Deary, Ian J., Steve Strand, Pauline Smith, and Cres Fernandes. 2007. Intelligence and educational achievement. Intelligence 35: 13–21. [Google Scholar] [CrossRef]
  26. Detterman, Douglas K. 2016. Education and intelligence: Pity the poor teacher because student characteristics are more significant than teachers or schools. The Spanish Journal of Psychology 19: e93. [Google Scholar] [CrossRef] [Green Version]
  27. Dobbie, Will, and Roland G. Fryer Jr. 2014. The impact of attending a school with high-achieving peers: Evidence from the New York City exam schools. American Economic Journal: Applied Economics 6: 58–75. [Google Scholar] [CrossRef] [Green Version]
  28. Dynarski, Susan M. 2018. Evidence on New York City and Boston Exam Schools. Brookings Institution. Available online: https://www.brookings.edu/research/evidence-on-new-york-city-and-boston-exam-schools/ (accessed on 23 March 2022).
  29. Dynarski, Susan, Joshua Hyman, and Diane Whitmore Schanzenbach. 2013. Experimental evidence on the effect of childhood investments on postsecondary attainment and degree completion. Journal of Policy Analysis and Management 32: 692–717. [Google Scholar] [CrossRef] [Green Version]
  30. Egalite, Anna J. 2016. How family background influences student achievement. EducationNext 16. Available online: https://www.educationnext.org/how-family-background-influences-student-achievement/ (accessed on 23 March 2022).
  31. Engelhardt, Lena, Frank Goldhammer, Oliver Lüdtke, Olaf Köller, Jürgen Baumert, and Claus H. Carstensen. 2021. Separating PIAAC competencies from general cognitive skills: A dimensionality and explanatory analysis. Studies in Educational Evaluation 71: 101069. [Google Scholar] [CrossRef]
  32. Firkowska, Anna, Antonina Ostrowska, Magdalena Sokolowska, Zena Stein, Mervyn Susser, and Ignacy Wald. 1978. Cognitive development and social policy. Science 200: 1357–62. [Google Scholar] [CrossRef]
  33. Frey, Meredith C., and Douglas K. Detterman. 2004. Scholastic assessment or g? The relationship between the Scholastic Assessment Test and general cognitive ability. Psychological Science 15: 373–78. [Google Scholar] [CrossRef]
  34. Gamoran, Adam, and Daniel A. Long. 2007. Equality of educational opportunity: A 40-year retrospective. In International Studies in Educational Inequality, Theory and Policy. Edited by Richard Teese, Stephen Lamb, Marie Duru-Bellat and Sue Helme. Dordrecht: Springer. [Google Scholar] [CrossRef]
  35. Gelman, Andrew, and Guido Imbens. 2013. Why Ask Why? Forward Causal Inference and Reverse Causal Questions. NBER Working Paper 19614. Available online: https://www.nber.org/papers/w19614 (accessed on 23 March 2022).
  36. Goldhaber, Dan. 2015. Exploring the potential of value-added performance measures to affect the quality of the teacher workforce. Educational Researcher 44: 87–95. [Google Scholar] [CrossRef]
  37. Gottfredson, Nisha C., A. T. Panter, Charles E. Daye, Walter A. Allen, Linda F. Wightman, and Meera E. Deo. 2008. Does diversity at undergraduate institutions influence student outcomes? Journal of Diversity in Higher Education 1: 80–94. [Google Scholar] [CrossRef] [Green Version]
  38. Grasby, Katrina L., Callie W. Little, Brian Byrne, William L. Coventry, Richard K. Olson, Sally Larsen, and Stefan Samuelsson. 2019. Estimating classroom-level influences on literacy and numeracy: A twin study. Journal of Educational Psychology 112: 1154–66. [Google Scholar] [CrossRef]
  39. Grömping, Ulrike. 2007. Estimators of Relative Importance in Linear Regression Based on Variance Decomposition. The American Statistician 61: 139–47. [Google Scholar] [CrossRef]
  40. Hair, Nicole L., Jamie L. Hanson, Barbara L. Wolfe, and Seth D. Pollak. 2015. Association of child poverty, brain development, and academic achievement. JAMA Pediatrics 169: 822–29. [Google Scholar] [CrossRef]
  41. Hart, Sara A., Callie Little, and Elsje van Bergen. 2021. Nurture might be nurture: Cautionary tales and proposed solutions. Npj Science of Learning 6: 2. [Google Scholar] [CrossRef]
  42. Heckman, James J. 2000. Policies to foster human capital. Research in Economics 54: 3–56. [Google Scholar] [CrossRef] [Green Version]
  43. Hunt, Earl. 2009. Good news, bad news, and a fallacy: A review of outliers: The story of success. Intelligence 37: 323–24. [Google Scholar] [CrossRef]
  44. Huntington-Klein, Nick. 2020. Human capital versus signaling is empirically unresolvable. Empirical Economics 60: 2499–531. [Google Scholar] [CrossRef]
  45. Jencks, Christopher, Marshall Smith, Henry Acland, Mary Jo Bane, David Cohen, Herbert Gintis, Barbara Heyns, and Stephan Michelson. 1972. Inequality: A Reassessment of the Effect of Family and Schooling in America. New York: Basic Books. [Google Scholar]
  46. Johnson, Wendy, Thomas J. Bouchard Jr., Robert F. Krueger, Matt McGue, and Irving I. Gottesman. 2004. Just one g: Consistent results from three test batteries. Intelligence 32: 95–107. [Google Scholar] [CrossRef]
  47. Judge, Timothy A., Ryan L. Klinger, and Lauren S. Simon. 2010. Time is on my side: Time, general mental ability, human capital, and extrinsic career success. Journal of Applied Psychology 95: 92–107. [Google Scholar] [CrossRef] [Green Version]
  48. Kaufman, Scott Barry, Matthew R. Reynolds, Xin Liu, Alan S. Kaufman, and Kevin S. McGrew. 2012. Are cognitive g and academic achievement g one and the same g? An exploration on the Woodcock-Johnson and Kaufman tests. Intelligence 40: 123–38. [Google Scholar] [CrossRef]
  49. Kelley, Truman L. 1927. Interpretation of Educational Measurements. Yonkers: World Book Company. [Google Scholar]
  50. Koenig, Katherine A., Meredith C. Frey, and Douglas K. Detterman. 2008. ACT and general cognitive ability. Intelligence 36: 153–60. [Google Scholar] [CrossRef]
  51. Kulkarni, Siddharth, and Jonathan Rothwell. 2015. Beyond College Rankings: A Value-Added Approach to Assessing Two- and Four-Year Schools. Washington, DC: Brookings Institution, Available online: https://www.brookings.edu/research/beyond-college-rankings-a-value-added-approach-to-assessing-two-and-four-year-schools/ (accessed on 23 March 2022).
  52. Kuncel, Nathan R., Sarah A. Hezlett, and Deniz S. Ones. 2004. Academic performance, career potential, creativity, and job performance. Can one construct predict them all? Journal of Personality and Social Psychology 86: 148–61. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Lakin, Joni M., and Jonathan Wai. 2020. Spatially gifted, academically inconvenienced: Spatially talented students experience less academic engagement and more behavioural issues than other talented students. British Journal of Educational Psychology 90: 1015–38. [Google Scholar] [CrossRef] [PubMed]
  54. Lakin, Joni M., and Jonathan Wai. 2022. Developing student aptitudes as an important goal of learning. Gifted Child Quarterly 66: 95–97. [Google Scholar] [CrossRef]
  55. Li, Hongbin, Lingsheng Meng, Xinzheng Shi, and Binzhen Wu. 2012. Does attending elite colleges pay in China? Journal of Comparative Economics 40: 78–88. [Google Scholar] [CrossRef]
  56. Light, Audrey. 2001. In-school work experience and the returns to schooling. Journal of Labor Economics 19: 65–93. [Google Scholar] [CrossRef]
  57. Lohman, David F. 1993. Teaching and testing to develop fluid abilities. Educational Researcher 22: 12–23. [Google Scholar] [CrossRef]
  58. Lortie-Forgues, Hugues, and Matthew Inglis. 2019. Rigorous large-scale RCTs are often uninformative: Should we be concerned? Educational Researcher 48: 158–66. [Google Scholar] [CrossRef] [Green Version]
  59. Lubinski, David, and Camilla P. Benbow. 2020. Intellectual precocity: What have we learned since Terman? Gifted Child Quarterly 65: 3–28. [Google Scholar] [CrossRef]
  60. Lubinski, David. 2020. Understanding educational, occupational, and creative outcomes requires assessing intraindividual differences in abilities and interests. Proceedings of the National Academy of Sciences 117: 16720–22. [Google Scholar] [CrossRef]
  61. Luchman, Joseph N. 2015. Determining subgroup difference importance with complex survey designs: An application of weighted dominance analysis. Survey Practice 8: 1–10. [Google Scholar] [CrossRef] [Green Version]
  62. Luchman, Joseph N. 2021. Determining relative importance in Stata using dominance analysis: Domin and domme. The Stata Journal 21: 510–38. [Google Scholar] [CrossRef]
  63. Lykken, David T. 1968. Statistical significance in psychological research. Psychological Bulletin 70: 151–59. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  64. Maranto, Robert, and Jonathan Wai. 2020. Why intelligence is missing from American education policy and practice, and what can be done about it. Journal of Intelligence 8: 2. [Google Scholar] [CrossRef] [Green Version]
  65. Martins, Pedro S., and Ian Walker. 2006. Student Achievement and University Classes: Effects of Attendance, Size, Peers, and Teachers. IZA Discussion Paper No. 2490. Available online: https://ssrn.com/abstract=955298 (accessed on 23 March 2022).
  66. Montenegro, Maximiliano, Paula Clasing, Nick Kelly, Carlos Gonzalez, Magdalena Jara, Rosa Alarcón, Augusto Sandoval, and Elvira Saurina. 2016. Library resources and students’ learning outcomes: Do all the resources have the same impact on learning? The Journal of Academic Librarianship 42: 551–56. [Google Scholar] [CrossRef] [Green Version]
  67. Neill, Christine. 2015. Rising student employment: The role of tuition fees. Education Economics 23: 101–21. [Google Scholar] [CrossRef]
  68. Neyt, Brecht, Eddy Omey, Dieter Verhaest, and Stijn Baert. 2018. Does student work really affect educational outcomes? A review of the literature. Journal of Economic Surveys 33: 896–921. [Google Scholar] [CrossRef] [Green Version]
  69. Piantadosi, Steven, David P. Byar, and Sylvan B. Green. 1988. The ecological fallacy. American Journal of Epidemiology 127: 893–904. [Google Scholar] [CrossRef]
  70. Pike, Gary R., George D. Kuh, Alexander C. McCormick, Corinna A. Ethington, and John C. Smart. 2011. If and when money matters: The relationships among educational expenditures, student engagement and students’ learning outcomes. Research in Higher Education 52: 81–106. [Google Scholar] [CrossRef]
  71. Quiroga, Maria Angeles, Alice Diaz, Francisco J. Roman, Jesus Privado, and Roberto Colom. 2019. Intelligence and video games: Beyond “brain games”. Intelligence 75: 85–94. [Google Scholar] [CrossRef]
  72. Quiroga, Maria Angeles, Sergio Escorial, Francisco J. Roman, Daniel Morillo, Andrea Jarabo, Jesus Privado, Miguel Hernandez, Borja Gallego, and Roberto Colom. 2015. Can we reliably measure the general factor of intelligence (g) through commercial video games? Yes, we can! Intelligence 53: 1–7. [Google Scholar] [CrossRef]
  73. Ree, Malcolm James, and James A. Earles. 1991. The stability of g across different methods of estimation. Intelligence 15: 271–78. [Google Scholar] [CrossRef]
  74. Revelle, William, Joshua Wilt, and David M. Condon. 2011. Individual differences and differential psychology: A brief history and prospect. In The Wiley-Blackwell Handbook of Individual Differences. Edited by Tomas Chamorro-Premuzic, Sophie von Stumm and Adrian Furnham. Hoboken: Wiley-Blackwell, pp. 3–38. [Google Scholar]
  75. Rindermann, Heiner. 2007. The g-factor of international cognitive ability comparisons: The homogeneity of results in PISA, TIMSS, PIRLS, and IQ-tests across nations. European Journal of Personality 21: 667–706. [Google Scholar] [CrossRef]
  76. Ritchie, Stuart J., and Elliot M. Tucker-Drob. 2018. How much does education improve intelligence? A meta-analysis. Psychological Science 29: 1358–69. [Google Scholar] [CrossRef]
  77. Roksa, Josipa, Cindy Ann Kilgo, Teniell L. Trolian, Ernest T. Pascarella, Charles Blaich, and Kathleen S. Wise. 2017. Engaging with diversity: How positive and negative diversity interactions influence students’ cognitive outcomes. The Journal of Higher Education (Columbus) 88: 297–322. [Google Scholar] [CrossRef]
  78. Roohr, Katrina Crotts, Margarita Olivera-Aguilar, and Ou Lydia Liu. 2021. Value Added in Higher Education: Brief History, Measurement, Challenges, and Future Directions. In Learning Gain in Higher Education (International Perspectives on Higher Education Research, Vol. 14). Edited by Christina Hughes and Malcolm Tight. Bingley: Emerald Publishing Limited, pp. 59–76. [Google Scholar] [CrossRef]
  79. Schlotter, Martin, Guido Schwerdt, and Ludger Woessmann. 2011. Econometric methods for causal evaluation of education policies and practices: A non-technical guide. Education Economics 19: 109–37. [Google Scholar] [CrossRef]
  80. Schmidt, Frank L. 2017. Beyond questionable research methods: The role of omitted relevant research in the credibility of research. Archives of Scientific Psychology 5: 32–41. [Google Scholar] [CrossRef]
  81. Schmidt, Frank L., and John Hunter. 2004. General mental ability in the world of work: Occupational attainment and job performance. Journal of Personality and Social Psychology 86: 162–73. [Google Scholar] [CrossRef] [Green Version]
  82. Schult, Johannes, and Jorn R. Sparfeldt. 2016. Do non-g factors of cognitive ability tests align with specific academic achievements? A combined bifactor modeling approach. Intelligence 59: 96–102. [Google Scholar] [CrossRef]
  83. Sims, Sam, Jake Anders, Matthew Inglis, and Hugues Lortie-Forgues. 2020. Quantifying ‘Promising Trials’ Bias in Randomized Controlled Trials in Education. Working Paper No. 20-16. London: University College London, Centre for Education Policy and Equalising Opportunities (CEPEO). [Google Scholar]
  84. Singer, Judith D. 2019. Reshaping the arc of quantitative educational research: It’s time to broaden our paradigm. Journal of Research on Educational Effectiveness 12: 570–93. [Google Scholar] [CrossRef]
  85. Snow, Richard E. 1996. Aptitude development and education. Psychology, Public Policy, and Law 2: 536–60. [Google Scholar] [CrossRef]
  86. Spearman, Charles. 1904. “General intelligence,” objectively determined and measured. The American Journal of Psychology 15: 201–92. [Google Scholar] [CrossRef]
  87. Steyvers, Mark, and Robert J. Schafer. 2020. Inferring latent learning factors in large-scale cognitive training data. Nature Human Behaviour 4: 1145–55. [Google Scholar] [CrossRef] [PubMed]
  88. Stinebrickner, Todd R., and Ralph Stinebrickner. 2007. The causal effect of studying on academic performance. NBER Working Paper No. 13341. Cambridge, MA: National Bureau of Economic Research. [Google Scholar] [CrossRef]
  89. The Chronicle of Higher Education. forthcoming. Diversity Indexes. The Chronicle of Higher Education. Available online: www.chronicle.com/package/diversity-indexes/ (accessed on 23 March 2022).
  90. Thorndike, Robert M., and David F. Lohman. 1990. A Century of Ability Testing. Chicago: The Riverside Publishing Company. [Google Scholar]
  91. U.S. Department of Education College Scorecard. 2017–2018. Available online: https://collegescorecard.ed.gov/ (accessed on 23 March 2022).
  92. Wai, Jonathan, and Drew H. Bailey. 2021. How intelligence research can inform education and public policy. In The Cambridge Handbook of Intelligence and Cognitive Neuroscience. Edited by Aron K. Barbey, Sherif Karama and Richard J. Haier. Cambridge: Cambridge University Press, pp. 434–47. [Google Scholar] [CrossRef]
  93. Wai, Jonathan, and Joni M. Lakin. 2020. Finding the missing Einsteins: Expanding the breadth of cognitive and noncognitive measures used in academic services. Contemporary Educational Psychology 63: 101920. [Google Scholar] [CrossRef]
  94. Wai, Jonathan, and Kaja Perina. 2018. Expertise in journalism: Factors shaping a cognitive and culturally elite profession. Journal of Expertise 1: 57–78. Available online: https://www.journalofexpertise.org/articles/volume1_issue1/JoE_2018_1_1_Wai_Perina.html (accessed on 23 March 2022).
  95. Wai, Jonathan, Matt I. Brown, and Christopher F. Chabris. 2018. Using standardized test scores to include general cognitive ability in education research and policy. Journal of Intelligence 6: 37. [Google Scholar] [CrossRef] [Green Version]
  96. Wai, Jonathan. 2013. Investigating America’s elite: Cognitive ability, education, and sex differences. Intelligence 41: 203–11. [Google Scholar] [CrossRef]
  97. Whitehurst, Grover J. “Russ”, Matthew M. Chingos, and Michael R. Gallaher. 2013. Do School Districts Matter? Washington, DC: Brookings Institution. Available online: https://www.brookings.edu/research/do-school-districts-matter/ (accessed on 23 March 2022).
  98. Winitzky-Stephens, Jessie R., and Jason Pickavance. 2017. Open educational resources and student course outcomes: A multilevel analysis. International Review of Research in Open and Distributed Learning 18: 35–49. [Google Scholar] [CrossRef] [Green Version]
  99. Wolf, Alison. 2003. Does Education Matter? Myths about Education and Economic Growth. London: Penguin Global. [Google Scholar]
  100. Wolf, Patrick, ed. 2019. School Choice: Separating Fact from Fiction. New York: Routledge. [Google Scholar]
Table 1. Variable selection and description.

Dataset | U.S. News 2014 | College Scorecard 2017–2018

Student characteristics
SAT score | Average SAT score | Average SAT score

Outcomes
Earnings | PayScale early-career | 6 years after graduation
— | PayScale mid-career | 10 years after graduation
Ranking | THE U.S. Ranking | THE U.S. Ranking
— | Lumosity Ranking | Lumosity Ranking

Institutional factors
Cost of attending | Room and board | Cost per academic year
Admission, enrollment, and completion | Total enrollment | Total enrollment
— | Acceptance rate | Admission rate
— | — | Six-year completion rate
— | Average freshman retention rate | Retention at 4-year institution
University resources | Endowment | Faculty average salary/month
— | % Graduating students ever borrowed loan | % Students ever borrowed
— | Average student debt | % Students received Pell grants
Diversity | Diversity index | Diversity index
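To make the Table 1 mapping concrete, the sketch below renames raw columns onto a common analysis schema. The College Scorecard codes shown follow that dataset's public documentation, but the file name, the final variable names, and the preparation steps are illustrative assumptions rather than the authors' code; the U.S. News file would be prepared analogously.

```python
import pandas as pd

# Sketch: align the raw College Scorecard file with the common variable
# labels of Table 1. Column codes are assumptions for illustration.
scorecard = pd.read_csv("college_scorecard_2017_18.csv").rename(columns={
    "SAT_AVG": "sat_avg",           # average SAT score
    "COSTT4_A": "cost",             # cost per academic year
    "ADM_RATE": "admission_rate",
    "UGDS": "enrollment",           # total undergraduate enrollment
    "RET_FT4": "retention",         # retention at 4-year institution
    "C150_4": "completion",         # six-year completion rate
    "AVGFACSAL": "faculty_salary",  # faculty average salary/month
    "PCTFLOAN": "pct_borrowed",
    "PCTPELL": "pct_pell",
    "LOCALE": "location",
    "CONTROL": "control",
    "STABBR": "state",              # used later for clustered errors
})
# The diversity index comes from a separate Chronicle of Higher Education
# file and would be merged in by institution identifier.
```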
Table 2. OLS regression coefficients of SAT/ACT and institutional factors predicting salary.

Panel A: College Scorecard Data 2016–2017

6-year salary | Model 1 (SAT) | Model 2 (institutional factors) | Model 3 (SAT + institutional factors)
Coefficient for SAT score | 44.228 *** (3.767) | n/a | 16.387 * (7.955)
N | 341 | 341 | 341
R² | 0.419 | 0.569 | 0.587

10-year salary | Model 1 (SAT) | Model 2 (institutional factors) | Model 3 (SAT + institutional factors)
Coefficient for SAT score | 65.634 *** (5.299) | n/a | 22.039 * (9.651)
N | 341 | 341 | 341
R² | 0.470 | 0.641 | 0.653

Panel B: U.S. News Ranking Data 2014

Early-career salary | Model 1 (SAT) | Model 2 (institutional factors) | Model 3 (SAT + institutional factors)
Coefficient for SAT score | 29.207 *** (3.853) | n/a | 19.093 *** (6.442)
N | 264 | 264 | 264
R² | 0.296 | 0.378 | 0.416

Mid-career salary | Model 1 (SAT) | Model 2 (institutional factors) | Model 3 (SAT + institutional factors)
Coefficient for SAT score | 75.686 *** (5.702) | n/a | 40.299 *** (10.983)
N | 264 | 264 | 264
R² | 0.412 | 0.525 | 0.560

Robust clustered standard errors at state level are in parentheses. *** p < 0.001, * p < 0.05.
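A minimal sketch of the three specifications behind Table 2 follows, under the assumptions of the data-preparation sketch after Table 1 plus merged salary outcomes and a state column for clustering; variable names are illustrative, and the institutional-factor list mirrors the College Scorecard column of Table 1.

```python
import statsmodels.formula.api as smf

# df: prepared analysis frame (see the Table 1 sketch); salary_6yr and
# diversity are assumed to have been merged in from other sources.
inst = ("cost + admission_rate + retention + completion + enrollment + "
        "faculty_salary + pct_borrowed + pct_pell + diversity + "
        "C(location) + C(control)")
cl = {"cov_type": "cluster", "cov_kwds": {"groups": df["state"]}}

m1 = smf.ols("salary_6yr ~ sat_avg", df).fit(**cl)            # Model 1
m2 = smf.ols(f"salary_6yr ~ {inst}", df).fit(**cl)            # Model 2
m3 = smf.ols(f"salary_6yr ~ sat_avg + {inst}", df).fit(**cl)  # Model 3
print([round(m.rsquared, 3) for m in (m1, m2, m3)])
```

Swapping the outcome for a ranking variable yields the Table 3 specifications under the same assumptions.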
Table 3. OLS coefficients for SAT predicting college and university rankings.

Panel A: College Scorecard Data 2016–2017
Outcome and model | Coefficient for SAT score | Standard error | N | R²
THE U.S. ranking, Model 1 (SAT) | −1.527 *** | 0.086 | 280 | 0.526
THE U.S. ranking, Model 2 (institutional factors) | n/a | — | 280 | 0.754
THE U.S. ranking, Model 3 (SAT + institutional factors) | −0.585 ** | 0.147 | 280 | 0.770
Lumosity ranking, Model 1 (SAT) | −0.847 *** | 0.041 | 204 | 0.514
Lumosity ranking, Model 2 (institutional factors) | n/a | — | 204 | 0.585
Lumosity ranking, Model 3 (SAT + institutional factors) | −0.460 *** | 0.129 | 204 | 0.615

Panel B: U.S. News Data 2014
Outcome and model | Coefficient for SAT score | Standard error | N | R²
THE U.S. ranking, Model 1 (SAT) | −1.717 *** | 0.075 | 269 | 0.562
THE U.S. ranking, Model 2 (institutional factors) | n/a | — | 269 | 0.746
THE U.S. ranking, Model 3 (SAT + institutional factors) | −0.745 *** | 0.126 | 269 | 0.778
Lumosity ranking, Model 1 (SAT) | −0.929 *** | 0.050 | 199 | 0.564
Lumosity ranking, Model 2 (institutional factors) | n/a | — | 199 | 0.518
Lumosity ranking, Model 3 (SAT + institutional factors) | −0.742 *** | 0.088 | 199 | 0.626

Robust standard errors clustered at state level. *** p < 0.001, ** p < 0.01.
Table 4. Proportion of explained variance using Model 1, Model 2, and Model 3.

Variable | Model 1 R² | Model 2 R² | Model 3 R² | R²(Model 1)/R²(Model 3) | R²(Model 2)/R²(Model 3)

Panel A: College Scorecard dataset 2017–2018
6-year salary | 0.419 | 0.569 | 0.581 | 0.721 | 0.979
10-year salary | 0.470 | 0.641 | 0.653 | 0.720 | 0.982
THE U.S. ranking | 0.526 | 0.754 | 0.770 | 0.683 | 0.979
Lumosity ranking | 0.514 | 0.585 | 0.615 | 0.836 | 0.951

Panel B: U.S. News Data 2014
Early-career salary | 0.296 | 0.378 | 0.416 | 0.712 | 0.909
Mid-career salary | 0.412 | 0.525 | 0.560 | 0.736 | 0.938
THE U.S. ranking | 0.562 | 0.746 | 0.778 | 0.722 | 0.959
Lumosity ranking | 0.564 | 0.518 | 0.626 | 0.901 | 0.827
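The two right-hand columns of Table 4 are simple ratios of the R² values to their left: each sub-model's R² divided by the full Model 3 R². For example, for 6-year salary in Panel A, 0.419/0.581 ≈ 0.721 and 0.569/0.581 ≈ 0.979:

```python
# Reproducing the Table 4 ratio columns for Panel A, 6-year salary.
r2_m1, r2_m2, r2_m3 = 0.419, 0.569, 0.581
print(round(r2_m1 / r2_m3, 3))  # 0.721
print(round(r2_m2 / r2_m3, 3))  # 0.979
```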
Table 5. OLS coefficients and importance of predictors of student long-term outcomes and school rankings using Model 3, College Scorecard data.

Each panel reports: Predictor | Coefficient | Standard error | Standardized dominance statistic | Ranking

Panel A: Mean salary six years after graduation
Average SAT score | 16.387 ** | 5.249 | 0.188 | 2
Cost per academic year | −0.135 | 0.096 | 0.028 | 8
Admission rate | −3136.935 | 1883.976 | 0.016 | 9
Retention at 4-year institution | 18,836.437 ** | 6150.669 | 0.157 | 3
Completion rate | −6873.158 | 4395.932 | 0.127 | 5
Total enrollment | −0.137 ** | 0.043 | 0.040 | 7
Faculty average salary/month | 1.113 *** | 0.228 | 0.152 | 4
% Students ever borrowed | 11,325.510 | 8586.535 | 0.005 | 11
% Students received Pell grants | −20,357.645 *** | 3811.716 | 0.221 | 1
Diversity index | 82.190 *** | 23.575 | 0.055 | 6
Location | n/a | n/a | 0.010 | 10
School control | n/a | n/a | 0.000 | 12

Panel B: Mean salary ten years after graduation
Average SAT score | 22.039 ** | 6.747 | 0.177 | 2
Cost per academic year | 0.005 | 0.124 | 0.043 | 8
Admission rate | −5679.633 * | 2421.490 | 0.026 | 9
Retention at 4-year institution | 28,917.060 *** | 7905.503 | 0.178 | 1
Completion rate | −2061.146 | 5650.127 | 0.149 | 5
Total enrollment | −0.228 *** | 0.055 | 0.044 | 7
Faculty average salary/month | 1.514 *** | 0.293 | 0.155 | 4
% Students ever borrowed | 10,353.670 | 11,036.340 | 0.005 | 11
% Students received Pell grants | −20,776.170 *** | 4899.228 | 0.163 | 3
Diversity index | 103.223 *** | 30.302 | 0.049 | 6
Location | n/a | n/a | 0.011 | 10
School control | n/a | n/a | 0.000 | 12

Panel C: THE U.S. ranking
Average SAT score | −0.585 *** | 0.130 | 0.152 | 4
Cost per academic year | −0.009 *** | 0.003 | 0.119 | 5
Admission rate | 59.200 | 60.210 | 0.031 | 9
Retention at 4-year institution | −108.600 | 208.100 | 0.153 | 2
Completion rate | −543.900 *** | 116.100 | 0.177 | 1
Total enrollment | −0.002 | 0.001 | 0.091 | 6
Faculty average salary/month | −0.033 *** | 0.007 | 0.153 | 3
% Students ever borrowed | 203.800 | 230.400 | 0.007 | 11
% Students received Pell grants | −331.400 *** | 101.200 | 0.062 | 7
Diversity index | −1.249 ** | 0.569 | 0.038 | 8
Location | n/a | n/a | 0.019 | 10
School control | n/a | n/a | 0.000 | 12

Panel D: Lumosity ranking
Average SAT score | −0.460 *** | 0.120 | 0.258 | 1
Cost per academic year | 0.004 | 0.002 | 0.046 | 6
Admission rate | −83.449 | 51.189 | 0.023 | 8
Retention at 4-year institution | −165.522 | 202.047 | 0.171 | 3
Completion rate | −162.704 | 100.635 | 0.157 | 4
Total enrollment | 0.001 | 0.001 | 0.032 | 7
Faculty average salary/month | −0.006 | 0.005 | 0.068 | 5
% Students ever borrowed | 14.267 | 183.907 | 0.006 | 11
% Students received Pell grants | 346.395 ** | 116.398 | 0.221 | 2
Diversity index | −0.327 | 0.646 | 0.008 | 10
Location | n/a | n/a | 0.009 | 9
School control | n/a | n/a | 0.000 | 12

Robust standard errors clustered at state level. *** p < 0.001, ** p < 0.01, * p < 0.05.
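The standardized dominance statistics in Tables 5 and 6 follow the logic of dominance analysis as implemented in the domin package cited in the references (Luchman 2021): each predictor's general dominance statistic is its incremental R² averaged over all subsets of the other predictors, divided by the full model's R² so that the statistics sum to 1 within a panel; the Ranking column orders predictors by this quantity. The following is a brute-force Python sketch of that computation, offered as an illustration under these assumptions rather than as the authors' implementation. It fits all 2^p submodels, so it is practical only for modest predictor counts like those here, and it assumes a complete-case data frame.

```python
from itertools import combinations
import numpy as np
import statsmodels.api as sm

def subset_r2(y, X):
    """R^2 of an OLS fit on one subset of predictors (with intercept)."""
    return sm.OLS(y, sm.add_constant(X)).fit().rsquared

def standardized_dominance(df, outcome, predictors):
    """General dominance statistics, scaled to sum to 1 across predictors."""
    y = df[outcome]
    cache = {frozenset(): 0.0}  # the empty model explains nothing
    for k in range(1, len(predictors) + 1):
        for combo in combinations(predictors, k):
            cache[frozenset(combo)] = subset_r2(y, df[list(combo)])
    full_r2 = cache[frozenset(predictors)]
    stats = {}
    for x in predictors:
        others = [p for p in predictors if p != x]
        # mean incremental R^2 within each subset size, then across sizes
        by_size = [np.mean([cache[frozenset(c) | {x}] - cache[frozenset(c)]
                            for c in combinations(others, k)])
                   for k in range(len(others) + 1)]
        stats[x] = np.mean(by_size) / full_r2
    return stats
```

Multi-category predictors such as location and school control would need to be entered as blocks of dummy variables, which is presumably why those rows report n/a coefficients in the tables above.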
Table 6. OLS coefficients and importance of predictors of student long-term outcomes and school rankings using Model 3, U.S. News data.

Each panel reports: Predictor | Coefficient | Standard error | Standardized dominance statistic | Ranking

Panel A: PayScale early-career salary
Average SAT score | 19.092 ** | 7.011 | 0.283 | 1
Total enrollment | −0.021 | 0.038 | 0.095 | 5
Acceptance rate | 4311.067 | 2195.387 | 0.028 | 8
Average freshman retention | 10,510.41 * | 5154.387 | 0.200 | 2
Endowment | 8.82 × 10^−7 *** | 2.25 × 10^−7 | 0.119 | 3
Room and board | 0.153 | 0.141 | 0.044 | 7
% Graduating students ever borrowed loan | −4187.357 | 3344.803 | 0.094 | 6
Average student debt | 0.177 * | 0.075 | 0.028 | 9
Diversity index | 89.793 *** | 25.093 | 0.098 | 4
Location | n/a | n/a | 0.008 | 11
School control | n/a | n/a | 0.004 | 10

Panel B: PayScale mid-career salary
Average SAT score | 40.299 *** | 10.757 | 0.273 | 1
Total enrollment | −0.037 | 0.074 | 0.084 | 4
Acceptance rate | 7188.705 | 4375.667 | 0.036 | 8
Average freshman retention | 38,653.78 *** | 11,396.04 | 0.268 | 2
Endowment | 9.93 × 10^−7 | 6.12 × 10^−7 | 0.069 | 7
Room and board | 0.841 ** | 0.293 | 0.099 | 3
% Graduating students ever borrowed loan | −6762.375 | 5265.526 | 0.076 | 5
Average student debt | 0.279 * | 0.133 | 0.015 | 9
Diversity index | 175.039 *** | 48.531 | 0.074 | 6
Location | n/a | n/a | 0.003 | 11
School control | n/a | n/a | 0.005 | 10

Panel C: THE U.S. ranking
Average SAT score | −0.745 *** | 0.124 | 0.253 | 2
Total enrollment | −0.002 | 0.001 | 0.120 | 3
Acceptance rate | −72.737 | 61.716 | 0.041 | 8
Average freshman retention | −1054.54 *** | 137.275 | 0.299 | 1
Endowment | −2.58 × 10^−8 * | 1.09 × 10^−8 | 0.084 | 4
Room and board | −0.014 *** | 0.004 | 0.070 | 5
% Graduating students ever borrowed loan | 2.295 | 71.240 | 0.043 | 7
Average student debt | −0.008 *** | 0.002 | 0.020 | 9
Diversity index | −3.025 *** | 0.628 | 0.053 | 6
Location | n/a | n/a | 0.006 | 11
School control | n/a | n/a | 0.010 | 10

Panel D: Lumosity ranking
Average SAT score | −0.742 *** | 0.104 | 0.432 | 1
Total enrollment | −0.001 | 0.001 | 0.073 | 4
Acceptance rate | −142.921 ** | 53.496 | 0.042 | 6
Average freshman retention | −273.882 * | 131.068 | 0.255 | 2
Endowment | −8.90 × 10^−9 | 4.55 × 10^−9 | 0.078 | 3
Room and board | −0.004 | 0.004 | 0.028 | 7
% Graduating students ever borrowed loan | −27.626 | 59.679 | 0.060 | 5
Average student debt | 0.003 | 0.002 | 0.018 | 8
Diversity index | 0.616 | 0.594 | 0.013 | 9
Location | n/a | n/a | 0.001 | 10
School control | n/a | n/a | 0.000 | 11

Robust standard errors clustered at state level. *** p < 0.001, ** p < 0.01, * p < 0.05.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
