Wednesday, February 7, 2018

How Business School Deans Would Change MBA Rankings - Poets&Quants

Want to stir up an argument in the faculty lounge? It only takes one word: rankings.

They are the bane of academia, a crude instrument used to measure learning – an elusive and evolving mark that can take years to fully materialize. Some rankings are structured, often inadvertently, to confer advantages to established brands. Others offer perverse incentives for schools to “game the system” and calibrate their investments to maximize their return-on-ranking. Indeed, rankings are often lagging indicators, diffuse and fickle, that invariably cost some educators their jobs.

MAKING WHAT’S INTANGIBLE INTO SOMETHING LINEAR

Anjani Jain, the acting dean at the Yale School of Management, frames the conundrum represented by rankings this way: “How do you take this multidimensional and multifaceted complexity of a higher educational institution and reduce it to a linear order?”

Leading outlets apply a mix of methodologies to do just that. Bloomberg Businessweek leans heavily on surveys, with responses from students, alumni, and recruiters accounting for 80% of the ranking’s weight. U.S. News is far more quantitative, with inputs (GMATs and GPAs) and outputs (pay and placement) supplemented by recruiter and academic surveys. Forbes pegs performance by focusing exclusively on pay growth, while the Financial Times and The Economist rankings are a potpourri of nearly every conceivable benchmark.

Dean Glenn Hubbard, Columbia Business School

Three years ago, Columbia Business School’s Dean Glenn Hubbard even crafted his own ranking, which emphasized demand (applications per seat), choice (yield), average pay, and job offers. Predictably, his school lost a spot under his methodology. Still, it was a good-faith effort to grapple with the painful tradeoffs inherent in compiling an MBA ranking. Recently, Poets&Quants surveyed the deans of four leading business schools to get their take on rankings. Notably, they shared the data that they consider relevant, if not indispensable. They also outlined the variables and assumptions that produce volatility and distortion. At the same time, they offered alternative ways for students to measure school performance and identify the best fits for themselves.

STUDENT AND ALUMNI SURVEYS RIDDLED WITH CONFLICTS OF INTEREST

Make no mistake: business school deans parse rankings very carefully. Peter Rodriguez, dean of Rice University’s Jones Graduate School of Business, is among them. The overall rank is just window dressing to him. The real value comes from the collection of underlying “raw” data, such as placement rates. An added benefit? The data is broken into columns so he can easily compare schools side-by-side. “All of us look at the rankings because they often measure things we care about in the absence of rankings,” he tells P&Q. “They give you a sense of what the marketplace looks like and your place within it.”

Notably, Rodriguez assesses data closely related to student capabilities, such as GMATs and GPAs. Yield rate – a school’s ability to convert accepted applicants into full-time students – is another piece of data he values. However, he pays special attention to outcomes like starting pay. “I have more confidence in the market outcomes than I do with some of the ways that quality is measured that don’t necessarily reflect what students want,” he explains. “It always feels like the quality of the school is best measured by rolling up student choices, employer choices, and research outcomes that are hard and measurable.”

Rankings can also tip deans off to how their school’s brand is perceived in the marketplace. François Ortalo-Magné, dean of the London Business School, tells P&Q that alumni surveys spark his curiosity. That said, he is under no illusions about this tool. In his experience, alumni surveys embody a conflict of interest, where the sample carries a vested interest in the outcome. “The survey of opinions – it can be as valuable as a beauty contest,” he says. “There is a complication with asking people for their opinions when they know their opinions will be used for rankings.”

Rice’s Rodriguez feels a similar push-pull with alumni surveys. “They’ll say, ‘Sure, it was a great experience’ even if it wasn’t, if they know they are better off doing that. It’s probably why students want as many first-hand accounts as they can get. They know the number doesn’t tell them quite enough.”

METHODOLOGIES AND WEIGHTS IMPLY SOME BIAS

Scott DeRue, Dean of the Ross School of Business at the University of Michigan

Scott DeRue, dean at the University of Michigan’s Ross School of Business, has a soft spot for employer surveys in rankings. He views the relationship between business schools and employers in terms of supply and demand, with programs accountable for the quality of talent they furnish. “No single metric is perfect [in regards to quality], especially considering that organizations define quality differently based on their unique values and needs,” he concedes in a written statement to P&Q. “For this reason, employer surveys that assess quality of talent are particularly insightful. Employers are also the only source of information that sees across programs, which is a vital perspective for evaluating relative quality.”

While rankings collect reams of data and conduct intensive surveys, they also come with several drawbacks, according to the deans. One stems from the nature of rankings themselves. Regardless of intent, LBS’ Ortalo-Magné observes that rankings are designed with certain biases towards particular measures. “The weights and aggregation – that’s a bit more complicated because that starts with a certain value function on the value of certain pieces of data,” he explains. “It implies tradeoffs across the data. The way the data aggregates implies a particular stance on the variation of one metric as opposed to variation in the other metric. That I find much less valuable.”

The number of variables measured – and the weights they are assigned – is also a concern for Yale’s Jain. He calls simplicity “a virtue,” contending that a narrower focus is best, as users often place different weights on different variables themselves. “Making this calculus overly complicated – relying on factors of data that are not easily measurable or relying on surveys that tend to get lower response rates – is not always helpful. It makes the calculation too elaborate. A more parsimonious design of a ranking in terms of what variables are being measured is likely to be more robust.”

Dean François Ortalo-Magné, London Business School. Courtesy photo

HOW DO YOU MEASURE CULTURE AND MISSION?

Rankings are not only convoluted, Jain adds; they also take the power out of the hands of users. His solution? Craft custom tools that let applicants apportion weights to the variables that hold the greatest appeal to them and tag the results to particular programs. He touts the University of Washington’s “Do It Yourself” platform as an example of such an interface.
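
A minimal sketch of how such a do-it-yourself scorer might work. The schools, metrics, and weights below are invented purely for illustration; they are not drawn from any real ranking or from the University of Washington tool:

```python
# A toy "do-it-yourself" ranking: the applicant supplies the weights.
# Metric values are assumed to be pre-normalized to a 0-1 scale.
schools = {
    "School A": {"placement": 0.95, "avg_pay": 0.80, "yield": 0.70},
    "School B": {"placement": 0.90, "avg_pay": 0.95, "yield": 0.60},
    "School C": {"placement": 0.85, "avg_pay": 0.70, "yield": 0.90},
}

def rank(schools, weights):
    """Score each school as a weighted average of its normalized metrics."""
    total = sum(weights.values())
    scores = {
        name: sum(metrics[m] * w for m, w in weights.items()) / total
        for name, metrics in schools.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# One applicant may care mostly about placement...
print(rank(schools, {"placement": 0.6, "avg_pay": 0.2, "yield": 0.2}))
# ...another mostly about pay. The "right" order differs per user,
# which is exactly the argument against a single fixed weighting.
print(rank(schools, {"placement": 0.2, "avg_pay": 0.6, "yield": 0.2}))
```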

Another deep-rooted flaw with rankings involves differentiation. Ortalo-Magné points to a P&Q article on 10 Business Schools to Watch, which uses a mix of stats, cultural attributes, and recent initiatives to identify MBA programs on the rise. He then contrasts the story with rankings, which he says wrongly assume that business schools all pursue the same mission and haven’t differentiated themselves from peers in the marketplace.

Rodriguez echoes Ortalo-Magné’s sentiments. “It can be hard to use a single measure like a ranking or a single group of measures without knowing something about a school’s particular aim,” he shares. “We don’t all aspire to the same goals – and therefore some rankings are better for some than others. My school – and my prior schools – have been fairly global, but that is certainly not true of a lot of schools. That may not be their appropriate mission or aim.”

Rodriguez points to employment metrics as an example. “The organizations to which schools supply graduates is probably rather coarsely measured. We all tend to be measured against a very national and even global group of companies – but some schools have something more narrow as their appropriate target.”

VOLATILITY UNDERCUTS CREDIBILITY

Then, there is the volatility of rankings, where a Duke Fuqua can bounce from 1st to 8th to 3rd over three years in the Bloomberg Businessweek rankings or USC Marshall can somehow leap 25 spots in just one year with The Economist. Such swings undermine the credibility of business school rankings in Jain’s view – and he knows exactly where the blame falls. “Some of that volatility may reflect real change underlying the school’s curricular experience, but some of it is simply the result of methodology, such as survey response rates being as low as 30% (or lower).”

Yale SOM Acting Dean Anjani Jain

Surveys are hardly the only issue, Jain notes. He argues that certain metrics also create loopholes in how data is measured or reported. For example, employment rates can give some schools a decided advantage over their peers. According to Jain, the MBA CSEA (Career Services and Employer Alliance) standards set the reporting requirement at 85%. In other words, a school can meet the threshold despite lacking placement data on 15% of its graduating class. In context, this means a supersized program like Wharton’s could conceivably omit over 125 students from its reporting and still meet the threshold.

Turns out, this was more than a theoretical weakness. In the 2018 U.S. News rankings, Jain notes, a couple of schools lacked data on nearly 10% of the class. That creates quite a dilemma, given that pay and placement constitute 35% of the ranking’s weight. “The population of non-reports is not likely to be an unbiased sample of all students,” Jain argues. “It’s quite likely that the non-reporting portion of the class has a lower employment rate. By excluding that group completely, schools end up artificially inflating their employment rate.”

The solution to this, Jain adds, is already in place in the Financial Times ranking. “They use not just the employment rate, but they multiply the employment rate by the percentage of the class reporting. By doing this, they are saying that you cannot effectively claim that those who are not reporting are in employment. They penalize schools and close the loophole.”
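
The adjustment Jain describes amounts to one line of arithmetic: treat non-reporters as if they were unemployed. A minimal sketch, using invented figures that come from no school’s actual data:

```python
def adjusted_employment_rate(employed, reporting, class_size):
    """Financial Times-style adjustment: the employment rate among reporters,
    multiplied by the fraction of the class that reported at all."""
    raw_rate = employed / reporting          # rate among those who reported
    reporting_rate = reporting / class_size  # fraction of the class reporting
    return raw_rate * reporting_rate         # non-reporters count as unemployed

# Hypothetical class of 850 where 750 report and 700 of those are employed:
print(f"{700 / 750:.1%} raw rate")                                # 93.3%
print(f"{adjusted_employment_rate(700, 750, 850):.1%} adjusted")  # 82.4%
```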

‘YOU CAN ONLY GO TO ONE SCHOOL’

Which ranking metrics concern deans? Ross’ DeRue, for one, worries about the harm that can come from focusing on pay – and for good reason. For him, pay correlates more with industry mix than with program quality. “The rankings presume programs that average a higher salary level are somehow better programs relative to those programs that average a lower salary, yet salary differences are mostly a function of industry rather than program quality. The intense focus on salary also creates perverse incentives for schools to find legally permissible but ethically questionable ways to inflate salary information.”

Jain also worries that pay data hurts programs that cater to graduates who hail from outside the elite professions and firms. “A number of surveys measure the difference between salary at graduation and salary 3-5 years out,” he observes. “Sometimes the same surveys are using incoming salaries, which is a different measure in itself. So if a school tends to attract students who come from poor socioeconomic backgrounds or who worked in lower-paying jobs like the public sector or non-profits, they’ll get penalized if the entering salary is measured.”

Surveys, such as U.S. News’ ranking of business programs by academics, also rankle deans to an extent. Rodriguez jokes that no student goes to two MBA programs, which makes him wonder how deans or MBA directors can possess the first-hand knowledge to judge the success of far-away peer schools. “There is a big brand effect,” he admits. “It’s particularly true for schools that are smaller, younger, or more regional. That will be interpreted as being less familiar.”

METHODOLOGIES USED TO GENERATE PAGE VIEWS

Harvard Business School

This “brand effect” also creates a self-fulfilling prophecy that weighs on respondents who evaluate other programs, Rodriguez adds. “I think the analogy is probably from banking: If you owe the bank $1,000 and you can’t pay, that’s your problem. If you owe the bank a billion dollars and you can’t pay, that’s the bank’s problem. If HBS doesn’t show up high enough on your ranking, that’s the ranking’s problem. It just makes it hard to think about the programs. The problem with peer rankings is that we just don’t know enough about each other.”

A lack of familiarity and potential bias aren’t the only issues dogging ranking surveys. Like Jain, DeRue points to volatility, though he attributes it more to nurture than nature. “Some ranking agencies have concluded that they need or want volatility in the rankings to drive engagement (e.g., readership),” he asserts. “In some cases, this results in the agencies tweaking criteria without any clear reason.”

At the same time, DeRue wonders whether certain rubrics, such as recruiter scores, truly differentiate programs from each other. “Most rankings create a ‘black box’ around the use of ordinal rankings,” he says. “For example, if school A’s employer survey comes back with a 4.7 out of 5.0 score, and school B’s survey comes back with a 4.8 out of 5.0 score – is school B really better than school A? Any basic data analytics course would tell us no, and that these are statistically the same result… yet we treat them as different when we create the ranking.”
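
DeRue’s point can be checked with a standard two-sample test. A sketch, assuming – purely for illustration – that each score averages about 100 recruiter responses with a spread of half a point:

```python
from scipy.stats import ttest_ind_from_stats

# Hypothetical survey summaries; none of these figures come from an
# actual ranking's data.
stat, p = ttest_ind_from_stats(
    mean1=4.7, std1=0.5, nobs1=100,  # school A's employer-survey score
    mean2=4.8, std2=0.5, nobs2=100,  # school B's employer-survey score
)
print(f"p = {p:.2f}")  # ~0.16: the 0.1-point gap is statistically noise,
                       # yet an ordinal ranking still places B above A.
```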

Students working together at the Yale School of Management

WHAT MAKES A PROGRAM TRULY “GLOBAL”?

DeRue refers to such tricks as a “disservice,” one that portrays cracks as chasms. Another area where that may be true involves how “international” a program truly is. This “global presence” is often illustrated through how many students and faculty hail from outside the institution’s country. Rodriguez considers this a misnomer, as it doesn’t necessarily account for geographic breadth. It also doesn’t factor in context, adds Jain.

“A number of surveys will measure the proportion of faculty who are international,” he says. “That gives rise to, ‘How do you define international?’ Is it passports? Would people like me count as an American? Do I have an accent? Does living experience outside a home country count? What I teach (modeling) is a conventional subject matter where the degree to which I provide an international perspective may not be particularly relevant. Whether I am considered U.S. or not, my citizenship is not reflected in the material I teach or how I teach it.”

While the deans were happy to lay out the limitations of rankings, they weren’t shy about sharing solutions and alternatives either. For Rodriguez, customer satisfaction – namely, student satisfaction – is one of the best indicators of quality, and one over which institutions hold far greater control.

ALL ABOUT CUSTOMER SERVICE

“What could be better measured is the teaching experience; the way one is treated as a whole by the organization; and the overall student experience – they all matter hugely once students get to campus,” Rodriguez stresses. “After you’ve made your choice, they really form the basis; you’re captive, and the school should treat you really well once you’re no longer on the open market. Too often, we’re measuring them at the point of decision. We’re measuring everything about them while they’re here. The rest is harder to measure.”

Students celebrate their 2017 graduation from the University of Michigan

DeRue labels such a benchmark as “the process,” a qualitative means to better spotlight the experiential considerations alongside the usual inputs and outputs. “The process would consist of the educational experience, the culture and community of people, and the general access to career and professional development opportunities,” he reveals. “Too often, prospective students use rankings to generate a consideration set of schools based on input and output metrics, and then spend considerable resources trying to figure out and make sense of the experience.”

Customer satisfaction comes in other forms too. For DeRue, this means highlighting satisfaction rates through the lens of the ultimate consumer: employers. He urges outlets that produce rankings to better factor in the perspective of employers. In DeRue’s view, cognitive measures like GMAT remain important. However, he worries that programs sometimes spend too much energy on inputs at the expense of intangibles like a “diverse student body” that both enriches the learning experience and supplies a “more diverse talent pool.” True to form, he also believes ranking outlets should devote greater attention to what employers need from MBAs.

HOW MUCH IS AN MBA WORTH OVER THE LONG HAUL?

“Every single employer that I talk with indicates a strong need for greater analytical capability, more teamwork and collaboration skills, and enhanced leadership-related capabilities,” DeRue writes. “What if ranking agencies surveyed employers about what they need, and then aligned their employer surveys (and thus rankings) with those needs? This would be consistent with the premise that business schools are suppliers of talent, and our supply of talent needs to align with and meet the needs of these employers.”

Hiring an MBA is a six-figure investment for most firms. That’s one reason why DeRue, again looking at rankings from an employer’s perspective, would build a length-of-tenure metric into his MBA ranking, coupled with a “subjective assessment” of the “overall value” that each school’s MBAs deliver over time.

“The value of talent to employers is a function of (a) how much value the person offers and (b) how long the person stays at the organization,” he explains. “If an organization hires an MBA from a top school, my understanding from employers is that the return on their investment is negative until 2 or 3 years in. The market value (salary) is greater than the delivered value until the person has moved up the learning curve. After 2 or 3 years, the person begins to add value that is commensurate with his or her salary. If true, the average number of years a person stays with the employer is a key metric of value.”

TRACKING CHOICES TO PICK WINNERS AND LOSERS

The deans also shared ideas that would be game-changers in theory, but might be more difficult to pull off in practice. Jain’s emphasis on ‘revealed preferences’ is a case in point. An economic theory, revealed preference examines how subjects actually behave when faced with a choice. In the case of MBA programs, it would measure how many students chose a particular school after weighing it against an equal set of options (i.e., being accepted into other programs with equal financial aid offerings). Mind you, schools are loath to share such information – and there is no centralized means to break down the thousands of decisions that revealed preferences would encompass. However, it presents an enticing alternative to similar, albeit hazy, measures like applications-per-seat and yield.

“There are many attributes relevant to the business school experience, and individual prospective students think about them with different weights attached,” Jain explains. “If you look at the schools that students choose – as between School A, School B, or School C – their revealed preferences – voting with their feet – capture a lot of what is relevant to aggregate populations of students about all of the attributes of the school that give rise to its reputation, prominence, and quality of educational experience.”

These revealed preferences serve another function too, Jain adds. “They also capture students’ expectations of the future benefits that they will derive from the education – the network and so on. In some sense, what is relevant, at least to an aggregate population, is captured in the choices they end up making. A very simple, parsimonious way to do a survey is to look at the win-loss ratios – what fraction go to School A vs. School B. You could also do tiers of schools or between schools as well.”
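
A sketch of the win-loss tally Jain describes, computed over hypothetical head-to-head matriculation decisions. The admit-and-choice records below are fabricated, since, as noted above, no centralized source of such data exists:

```python
from collections import defaultdict

# Each record pairs the set of schools that admitted a student with the
# school the student chose. These decisions are invented for illustration.
decisions = [
    ({"School A", "School B"}, "School A"),
    ({"School A", "School B"}, "School A"),
    ({"School A", "School C"}, "School C"),
    ({"School B", "School C"}, "School B"),
]

wins = defaultdict(int)      # wins[(x, y)]: times x was chosen over y
matchups = defaultdict(int)  # matchups[{x, y}]: head-to-head encounters

for admits, chosen in decisions:
    for other in admits - {chosen}:
        wins[(chosen, other)] += 1
        matchups[frozenset((chosen, other))] += 1

# Win-loss ratios: what fraction of students picked x when y was an option.
for pair, n in matchups.items():
    x, y = sorted(pair)
    print(f"{x} chosen over {y}: {wins[(x, y)] / n:.0%} of {n} matchups")
```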

DOES LONG-TERM PERFORMANCE HAVE A PLACE?

Another gap in rankings, deans say, involves an outlook that is all too short-term. Law schools, for example, can measure learning (to an extent) by bar passage rates and long-term success by the number of alumni in key positions like judgeships. Jain, for one, would love to find a way to measure an education’s impact over 5-10 years – and even longer. Rather than wrestling with the chicken-or-the-egg argument – was success the reflection of inputs or education quality – Jain simply urges ranking outlets to factor in the business impact made by business schools.

A herculean task, no doubt, which may be why DeRue would scale it back to an extent. “If rankings continue to survey alumni, in my opinion, the focus should be on career success over some time period, or maybe even multiple time periods (short and long term).”

Overall, the deans urged ranking outlets to place less emphasis on GMATs, with Jain being the lone exception. In his experience, he has found a very tight correlation between GMAT and GPA scores and academic success. In many corners, Jain notes, popular opinion treats academic success as “inconsequential to the MBA experience” – a place where, Jain jests, “the top third of the class academically will work in the middle in companies started by the bottom third.”

Peter Rodriguez, Dean of Rice University’s Jones Graduate School of Business

DOES THE GMAT PREDICT HEART AS MUCH AS SMARTS?

Jokes aside, incoming GMAT scores and outgoing academic performance may be far more than a narrow measure of success. Instead, they may predict why some students rise faster than others. “People who do well in the academic institution will do well outside of it,” Jain argues. “People who excel in one domain will continue to want to excel in other domains.”

Jain draws upon his own experiences to build upon this point. In the Indian educational system, applicants complete a brutal entrance exam that weeds out 99% of candidates. Indeed, this 1% excels in what Jain calls a “very narrow cognitive domain.” Fast forward 15 years, and these same students are still racking up achievements across a wide range of professions – even creative endeavors, Jain says. This success, he believes, stems from something more essential than possessing innate intelligence or reaping the advantages of an elite education.

“My conjecture on why this is the case is not that the narrow mathematical and analytical ability that served them well in the entrance test ends up being consequential to successful careers in leadership,” he grants. “There is something more fundamental about these tests. Perhaps inadvertently, the more fundamental attribute that they pick up could be called grit, persistence, willingness to compete, or hard work. These tests have the ability to pick up things that are much more consequential.”

That’s why Jain, ever the contrarian, would keep the emphasis on academic inputs… just measured in a different way. “If surveys began to put more weight on these simple and fundamental attributes of the students, I think that will distinguish them. With the qualified GPA and GMAT or GRE, you can explain where students are going when they have choices, and I think you can explain in the long run which students turn out to be successful, because these factors are more powerful than many of us realize.”

YOUR BUSINESS SCHOOL IS THE CHOICE OF A LIFETIME

Students gather on the campus of the London Business School

In essence, rankings are starting points, ones designed to help candidates narrow their choices. For LBS’ Ortalo-Magné, the process should really start with self-reflection, with a focus on the long-term over the here-and-now. In particular, he asks students to picture the type of person and alum they want to be. This vision will ultimately help them pick the culture, experience, and career path that will help them become that person. And that’s an undertaking, Ortalo-Magné warns, that can’t be taken lightly.

“You’re committing, for the rest of your life, to be an alum of this school,” he cautions. “You can’t, once you graduate from a program, say you’re going to divorce that program and become an alum of Wharton. You can’t do that. You will never be allowed to attach yourself to another brand. There is no price at which you can do that.”

So what does Ortalo-Magné do when applicants ask him for advice? He’ll whip out a ranking…with a caveat, of course. Instead of touting the main number, he’ll point to the underlying data. “Let’s say your GMAT is 720,” he postulates. “I’ll help you find the table with the quantile range of GMATs at the different schools. Then, I’m going to ask you to reflect as to what will be comfortable for you. Some people don’t mind being the lowest GMAT in the class. Others are used to being the smartest student in the class, so it would be a real challenge to be in the bottom quantile. That reveals something about your comfort and the kind of people you want to be around.”

Ortalo-Magné will continue by probing for risk tolerance so he can help students pinpoint what he calls “acceptable sets” in terms of placement outcomes, salaries, debt loads, and geography. Once he has helped applicants narrow these sets, he can guide them back to who they want to be and what type of experience they want to enjoy.

START WITH THE HARD QUESTIONS

“Once we have narrowed, I will be able to say, ‘Do you want to stay in the Midwest?’ ‘Do you want your range of GMATs to be between 680 and 740?’ Then I’d encourage them to look for stories about schools. Some programs are pretty explicit and understand their brand. After that, read student blogs and watch student YouTubes. Go to social media, get a sense of the place, and then go and visit and spend time with them. My sense is you will end up visiting 3-5 schools and then you’ll understand the community you will join.”

Rice’s Rodriguez follows a similar process predicated on a familiar axiom: Know thyself. In Rodriguez’s experience, it is hard for prospective students to make a mistake at the very top end of schools. Like Ortalo-Magné, he believes the best choices are governed by how well applicants understand their learning style and objectives.

Rice MBA students in class.

“It begins with a few basic questions. Do you want a program that’s high touch and can give you a lot of attention?” he poses. “There are a number of schools that can do that. Is your primary aim instead to have a program that caters to specific industries or even a set of firms? Remember that almost all schools are somewhat regional in their placement. There are only a handful of truly national schools that can boast that they can get you anywhere. Even there, you have to carefully think about the odds. So that’s another good question to answer.”

‘DON’T LOOK FOR THE EXTERNAL MEASURES TO ANSWER THE INTERNAL QUESTIONS’

In the end, the goal of any ranking is to reduce the possibility of making a bad choice. However, Rodriguez believes the closeness of the fit usually starts with the depth of the personal reflection. “I often tell students, ‘Don’t look for the external measures to answer the internal questions. Answer the internal questions first and then go match them.’ That usually matters a lot… Know where you want to go and what you like and everything follows from that.”

Alas, Rodriguez believes rankings are the natural outgrowth of the importance of the decision. They are a logical and intelligible way to justify a six-figure investment paired with a year or two away from the workforce. They have a place in the discussion, in his estimation – but perhaps not at the center.

“Rankings are critical in many ways – but no one can afford to invest too much in any of them,” Rodriguez adds. “They’re relatively volatile. Among all of us, there is this sense of not letting rankings cloud true change and advance or retreat in any performance measure. As deans, that’s what we’re really concerned about: Are we doing better by our students and alumni, year-to-year?”

DON’T MISS: RANKING THE BUSINESS SCHOOL RANKINGS OR 10 BUSINESS SCHOOLS TO WATCH IN 2018
