### An attempt at combining all the data into one MVP Score

When I worked at Wawa, I spent my free time trying to create the ultimate condiment. I tinkered and tested sauces individually and combined, mixing barbecue with buffalo and a bit of ranch like Professor Utonium from *The Powerpuff Girls*. Every condiment worked in isolation, but mixing and matching produced alchemy.

Today, instead of sauces, I spend my time looking at NBA stats and pondering the same possibility. Picking an MVP based on a single data point proves untenable; *ESPN*'s RPM might pick Steph Curry as the top player, while *FiveThirtyEight*'s *RAPTOR* absolutely adores Nikola Jokić. Each metric offers utility, but each embeds assumptions in its methodology.

What if you took all of those brilliant numbers, modeled and vetted and indicative of something useful, and managed to combine them all?

Zack Kram of *The Ringer* wrote a wonderful article on the challenges of tracking and quantifying defensive performance. He interviewed NBA analysts, front office executives, and researchers to understand how they approach this question, offering an outline on how fans might better use the available data to improve their understanding.

Kram used the difficulty of grading Nikola Jokić's defensive prowess as the example for his piece. He spoke to Daniel Myers, who invented the advanced box plus-minus stat. Within that analysis of Jokić, Myers offered an ideal framework for our MVP scoring idea.

> First, use a “wisdom of the crowds” approach, blending different metrics to find a consensus average. “Every metric has players that are overrated. Every metric has players that are underrated,” Myers says. “If you look at all of them together, hopefully the blind spots offset each other.” If one metric thinks Jokic is terrible, one thinks he’s average, and one thinks he’s excellent, he’s probably somewhere in the middle, closer to average.

Myers and Kram referred specifically to the defensive side of performance, but their philosophy extends nicely to evaluating players for MVP.

Owen Phillips of the excellent *F5* covered a similar theory, where he offered to help *ESPN*'s Zach Lowe track advanced statistics by collecting all of them in a digestible spreadsheet. Phillips rightly pointed out that merely aggregating these stats, absent an understanding of each metric's expected scale, could easily mislead an analyst.

> For example, Nikola Jokic leads the league in PER (Player Efficiency Rating) at 31.3, which is about six points higher than the 10th place player in PER. Jokic also leads the league in VORP (Value Over Replacement Player) at 3.3, which is only a point and a half higher than the 10th place player in VORP. In other words, there’s more spread in PER than there is in VORP. And if we were to add them together it would erase the fact that a single VORP point is “worth” more than a single PER point.
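Phillips' warning is easy to see in miniature. Here's a quick sketch using the figures quoted above (the 10th-place values are back-computed from the stated gaps, not pulled from a live source):

```python
# Toy numbers from the quote above: the leaders vs. rough 10th-place values.
per = {"leader": 31.3, "tenth": 25.3}   # PER: ~6-point spread across the top ten
vorp = {"leader": 3.3, "tenth": 1.8}    # VORP: ~1.5-point spread across the top ten

per_spread = round(per["leader"] - per["tenth"], 1)
vorp_spread = round(vorp["leader"] - vorp["tenth"], 1)

# A naive sum of raw values lets PER's wider scale dominate: moving one point
# of VORP is far rarer (and so more meaningful) than moving one point of PER,
# but raw addition treats the two swings identically.
print(per_spread, vorp_spread)
```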

Each stat produces a distribution with different degrees of separation. Some, like PER, can create massive chasms between the best and the worst. For instance, among qualified players through 5/12, *Stathead* had Joel Embiid as the league leader at 31.1 PER, with Rodney Hood at the bottom at 5.6. A difference of 25.5 could wipe out the entire spectrum of results for more tightly distributed stats.

Long-time readers of the site might remember the MVP Scoring model I built, an aggregation of impact metrics to come to one unified score. That methodology looked at the count of standard deviations, or *z-score*, for statistics that often appeared in MVP discourse.

Measuring results by their z-score frames a player's performance relative to his peers. Steph Curry and Bradley Beal, the two top scorers by points per game this season, would produce high z-scores for that metric, and the gap between Beal and the average player helps increase Beal's total score. Essentially, the z-score captures just how much better someone did in each category, judged solely against his peers.

It avoids the scale issue that Phillips covered by looking at the individual distribution of each metric before aggregating; Joel Embiid's PER excellence becomes a 3.49 z-score (roughly 3.49 standard deviations above the average PER for his peers) while Rodney Hood lands at -1.9. Instead of a 25.5-point gap, the z-score now adjusts for PER's distribution and assigns a more translatable score.
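As a minimal sketch of that conversion (the five PER values below are invented to bracket the quoted extremes, not real league data):

```python
import statistics

def z_scores(values):
    """Express each value as standard deviations above/below the pool's mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)  # population st. dev. of the player pool
    return [(v - mean) / stdev for v in values]

# Hypothetical PER values for a tiny player pool (invented numbers).
per = [31.1, 24.0, 15.0, 11.0, 5.6]
zs = z_scores(per)

# The leader lands well above zero, the trailer well below, and the huge raw
# gap collapses onto a unitless scale comparable with any other metric.
```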

My initial post focused on the stats available on *Basketball Reference*, but as Phillips covered in his spreadsheet newsletter, there's a whole new world of metrics. Analysts, researchers, and data scientists created new models and methodologies to better understand player quality, from impact metrics like *FiveThirtyEight*'s *RAPTOR* to luck-adjusted all-in-one tracking like *RAPM*.

Emboldened by Myers and Kram's plea for metric diversity, I gathered as many variables as possible for this aggregate score—I pulled most of this data through 5/12 where available. This ended up at 44 distinct measures, combining data from *B-Ball Index*'s LEBRON, *ESPN*'s RPM, *FiveThirtyEight*, *NBA Shot Charts*, and *Basketball Reference*.

Instead of litigating metric quality, I'm curious what it looks like when you mash them all into one.

The league's most fascinating passer should become the most valuable player by season's end. Here's the total combined score for every player with at least 40 games played this season, distributed in this box plot. Scores in the 25th to 75th percentile fall inside the blue box, with an average score as the black line in the middle. Brackets on each end show the total distribution, with a few players sneaking out above the rest as extreme outliers.

You might notice Jokić.

This spread makes sense intuitively. With forty-four measures included, players can cancel out contributions in one stat, like defensive plus/minus, with demerits in areas where they don't thrive, like assists per game. A score of zero implies a wholly average player, while superstars who excel in multiple fields start to outpace the pack by producing above-average results across the board.
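The aggregation itself is then just addition on the common scale. A sketch, assuming a small table of per-metric z-scores (the player names and numbers here are invented for illustration):

```python
# Hypothetical per-metric z-scores for three players (invented numbers).
z_table = {
    "Player A": {"per": 2.1, "vorp": 2.4, "dbpm": 1.2},
    "Player B": {"per": 1.5, "vorp": -0.3, "dbpm": 0.8},
    "Player C": {"per": 0.0, "vorp": 0.1, "dbpm": -0.2},
}

# The combined score is the sum of z-scores: above-average results in many
# categories compound, while strengths and weaknesses offset each other.
scores = {name: sum(zs.values()) for name, zs in z_table.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
# A score near zero implies a wholly average player; the outliers pull away.
```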

Among those outliers, Jokić stands at the top of the top. He's outpacing the rest of the top five MVP candidates in total scoring.

He excels in so many stats, but particularly thrives in five of the most impactful. Jokić leads the league in Value over Replacement Player (VORP), Offensive Win Shares, Overall WAR, Wins Added, and Box Plus/Minus.

In short, each metric riffs on the idea of measuring impact. VORP and Overall WAR frame Jokić's output relative to a replacement-level player. Offensive Win Shares tracks his ability to contribute to success on offense, while Box Plus/Minus estimates his per-possession impact from box-score production. *FiveThirtyEight*'s Wins Added tries to calculate the actual wins driven by Jokić himself.

This swarmplot shows all eligible players by their z-score in each; the gap between Jokić and his peers highlights the excellence of his season.

Those stats tell the story of Jokić as an offensive supernova, a player contributing more to his team's box score success than anyone in the league.

As Kram outlined in his piece at *The Ringer*, evaluating an individual on only one metric can leave you at the mercy of that metric's biases and blind spots. Extending your analysis to include a variety of data points surfaces a broader trend, and that trend tells you that Jokić earned this MVP trophy.

In future posts, we'll take a look at the rest of the top five, and critique some of the methodology that left stalwarts like Steph Curry and LeBron James out of this fictional MVP ballot.