Chris - RAPM does not stand for "Real Adjusted Plus Minus"; it's "Regularized Adjusted Plus-Minus" or "Ridge-Regressed Adjusted Plus-Minus". It's so called because it's similar to the basic "Adjusted Plus-Minus" (APM) but uses ridge regression instead of OLS to build the model. The move from OLS (APM) to ridge (RAPM) was in fact made to tackle the problem of multicollinearity (teammates share most of their floor time, so the lineup indicators are highly correlated and OLS estimates become unstable). I've described the most common versions in the quote at the bottom of this post.
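To make the multicollinearity point concrete, here's a minimal, hypothetical sketch (not anyone's actual RAPM code): two "players" who are almost always on the floor together produce nearly identical columns, which makes OLS coefficients swing around while ridge keeps them stable. The numbers are fabricated for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
n_stints = 500

# Two "players" who share ~95% of their floor time -> nearly collinear indicator columns.
p1 = rng.integers(0, 2, n_stints)
p2 = np.where(rng.random(n_stints) < 0.95, p1, 1 - p1)
X = np.column_stack([p1, p2]).astype(float)
y = 2.0 * p1 + 1.0 * p2 + rng.normal(0, 8, n_stints)  # "true" impacts of +2 and +1, plus noise

print(LinearRegression().fit(X, y).coef_)  # OLS: high-variance, can land far from (+2, +1)
print(Ridge(alpha=50.0).fit(X, y).coef_)   # ridge: shrunk toward zero, but much more stable
```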
Also, raw +/- (what you're referring to with the Iggy stuff) shouldn't really be taken to mean anything beyond face value. When Andre Iguodala was on the floor in the 2015 Finals, Golden State outscored Cleveland by 62 points. Any conclusion beyond that is the fault of the individual, not the stat, because it very literally says nothing more than that. Did any of the voters actually say that they voted for Iggy over LBJ because of it? Because if so, that's on THEM, not +/-. (FWIW, I'm in support of James winning it last year).
On RAPM being predictive: it's not just future seasons, it's future games. With RAPM and 50% of the season completed, you can use it to predict the outcomes of the remaining 50%, and you will out-predict other stats attempting the same thing.
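If you want to run that kind of test yourself, here's a hedged sketch of the idea. It assumes you've already built a stint-level design matrix X (lineup indicators) and margin target y, sorted chronologically; the alpha value is a placeholder, not a tuned penalty.

```python
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

def backtest_rapm(X, y, alpha=2000.0):
    """Fit ridge RAPM on the first half of the season's stints (rows assumed to be
    in chronological order) and report RMSE when predicting the second half."""
    mid = len(y) // 2
    model = Ridge(alpha=alpha).fit(X[:mid], y[:mid])
    return mean_squared_error(y[mid:], model.predict(X[mid:])) ** 0.5
```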
Why does that matter? It matters because a predictive stat is a better indicator of goodness, as opposed to results-driven value - process-based thinking instead of results-based (we often use the same mindset in the financial/investment world when it comes to valuation). If Andre Miller gets randomly hot, is allowed to take 25 shots one game, and scores 51, the stats will "explain" him as having been good to X degree, but we know that in a vacuum he is not actually good to X degree, he is only good to Y degree. An explanatory stat will give more weight to X, whereas a predictive stat will likely do better at capturing Y.
As for Berri's comments, without getting into his obvious (and understandable) motivations for speaking against +/- in the face of his own stats, it's pretty straightforward to establish the case for scoring margin as being crucial in describing player/team goodness.
A.) Teams score X points and use Y possessions. Teams which, on average, have the best ratio of X:Y (ORTG or offensive efficiency) are typically the best offensive teams, with some caveats (ORB strategy, small-ball, resiliency - does your offense hold up against all teams or do you struggle against better strategies).
B.) Same thought process for defense: teams allow X points and their opponents use Y possessions. The lower the X:Y ratio (DRTG or defensive efficiency), the better the team's defense (with the same caveats). We can multiply both ratios by 100 to get cleaner-looking numbers; those per-100 versions are BBR's ORTG and DRTG.
C.) Therefore, the best overall teams, generally speaking, are those with the largest separation between their ORTG and DRTG (GSW and SAS this season). The caveats above still apply - you want to be consistent. A team with a +10 differential that plays like a +10 against all opponents is better than a +10 team that plays like a +6 against some opponents and a +14 against others.
D.) Continuing that line of thought, good players are those who improve their team's scoring margin/efficiency differential. An average player is a net 0. A bad player worsens their team's differential. Similar caveats apply here - consistency is important, but so is versatility/fit. A player who improves bad/average/good teams by 5 points is better than one who improves bad teams by 7, average teams by 5, and good teams by 3 (the latter's skills become redundant as your team acquires more talent around him).
E.) Players/opponents are frequently in and out of lineups and games, which means that over large time periods (multi-season for APM, >=1 season for RAPM) we can look at the 5-on-5 matchup +/- differentials of tens of thousands of lineups simultaneously and use regression (weighted by the possessions played by EACH particular 5-on-5 matchup) to extract an estimate of each player's impact on his team's scoring margin (there's a toy sketch of this setup right after this list). There are multiple ways to do this, and we've moved from APM (standard OLS) to RAPM (ridge regression) to prior-informed and weighted multi-year RAPM (slight variations).
F.) Neither the models described in E. nor the ones in my quote below are meant to be definitive player raters, because no matter what, 1.) we just don't have the ability to completely isolate a player's value to his team, and 2.) a player's value to his team is not necessarily his value in a vacuum. Again, anyone who attempts to use them so definitively is at fault, not the stats themselves.
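Here's that toy sketch of E. It's a hedged illustration, not anyone's production RAPM: the rosters, stint rows, and alpha are all made up, and a real run uses tens of thousands of possession-weighted stints and a cross-validated penalty. Each stint gets +1 for the five players on one side, -1 for the opposing five, and the target is the margin per 100 possessions.

```python
import numpy as np
from sklearn.linear_model import Ridge

# D.) in numbers first: a team scoring 110 per 100 possessions (ORTG 110) while allowing
# 104 per 100 (DRTG 104) has a +6 efficiency differential.

team1 = ["A1", "A2", "A3", "A4", "A5", "A6"]   # made-up rosters, six players a side
team2 = ["B1", "B2", "B3", "B4", "B5", "B6"]
players = team1 + team2
idx = {p: i for i, p in enumerate(players)}

# Each stint: (team1's five on the floor, team2's five, possessions, margin per 100 poss).
# Rows fabricated purely to show the encoding.
stints = [
    (["A1","A2","A3","A4","A5"], ["B1","B2","B3","B4","B5"], 20, +10.0),
    (["A1","A2","A3","A4","A6"], ["B1","B2","B3","B4","B5"], 15,  -6.7),
    (["A1","A2","A3","A5","A6"], ["B1","B2","B3","B4","B6"], 25,  +4.0),
]

X = np.zeros((len(stints), len(players)))
y = np.zeros(len(stints))
w = np.zeros(len(stints))
for row, (ours, theirs, poss, margin) in enumerate(stints):
    for p in ours:
        X[row, idx[p]] = 1.0      # +1: on the floor for the side whose margin we measure
    for p in theirs:
        X[row, idx[p]] = -1.0     # -1: on the floor for the opposing five
    y[row] = margin
    w[row] = poss                 # weight each matchup by the possessions it was played

rapm = Ridge(alpha=2000.0).fit(X, y, sample_weight=w)
for p, coef in zip(players, rapm.coef_):
    print(f"{p}: {coef:+.2f} estimated impact per 100 possessions")
```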
Briefly, SPM or Statistical Plus-Minus is actually what Berri describes in your article (box-score stats regressed on APM/RAPM/etc.) - it's an attempt to capture how much particular box-score stats drive a player's APM/RAPM score. RPM, by contrast, is a blend of RAPM and SPM in an attempt to increase predictive power. I have issues with it, but I recognize the purpose it's trying to serve. If ESPN or anyone else wants to use it as a catch-all metric, then they're stupid - it can't quite do that.
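For the curious, the SPM idea in code, again only as a rough sketch: the column names are placeholders, and this is not Berri's or ESPN's actual model, just "regress RAPM on per-100 box-score stats, then score players with the fitted weights."

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

BOX_COLS = ["pts100", "trb100", "ast100", "stl100", "blk100", "tov100"]  # illustrative columns

def fit_spm(box_df: pd.DataFrame) -> pd.Series:
    """box_df is assumed to have one row per player, the BOX_COLS per-100 stats,
    and a 'rapm' column. Returns each player's box-score-only estimate (the SPM)."""
    model = LinearRegression().fit(box_df[BOX_COLS], box_df["rapm"])
    return pd.Series(model.predict(box_df[BOX_COLS]), index=box_df.index, name="spm")
```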
------------------------------------
(whew)
So, if you were unconvinced before, I've probably said a whole lot of nothing. WTH does all of this mean?
It's pretty simple (and again, I think I'm disagreeing pretty staunchly with Professor Berri here): there are tens of thousands of actions on the basketball court (see the growing prominence of SportVU and Vantage Sports data) and the box-score captures a very small fraction of them. When we perform all this fancy statistical voodoo with +/-, we can do a much better job of encapsulating the total effect of ALL of these actions, BUT we cannot isolate what is driving them.
The box-score yields a problem of incompleteness.
+/- yields a problem of lack of specificity/granularity.
Let's think of a quick example.
LeBron setting a high screen, Delly handling, JR in the corner, Frye on the wing.
Delly sees that the rolling LeBron has drawn JR's man slightly off the weakside corner to help. He swings it to JR, but Frye's man has shifted over to deter JR's shot, so JR kicks it to Frye, who drains an open 3. +3 for Cleveland.
Delly gets a hockey assist, JR gets an assist, Frye gets a 1-for-1 3PM, LeBron gets nothing. BUT LeBron created the bulk of the opportunity by A.) being the deadliest finisher in the game, B.) screening the ball, C.) rolling towards the basket. Delly deserves credit for seeing/making the correct pass, JR deserves credit for being a good enough shooter to draw some attention from Frye's man and for making the kick. Frye deserves credit for draining the shot.
How do we distribute that +3? We can try to assign credit ourselves, but at the end of the day it'll be arbitrary. RAPM, on the other hand, can do so with more accuracy than we can - it looks at how successful 4-man units plus LeBron are, and how successful those same 4-man units are with another player in place of LeBron, and then makes a determination of how much impact LeBron is estimated to have had on said lineup (while adjusting for the strength of the opposing 5-man units). Problem is, it doesn't tell us JUST that - it only tells us how much LeBron impacts EVERY possible 4-man unit in aggregate - so we can't say how much credit he deserves for that +3, but we CAN say how much credit he deserves for the whole year (scaled to 100 possessions). Obviously with a margin of error, but, to put it frankly, margin of error is present with basic-level data anyway - hardly enough reason to just dismiss it.
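If it helps, here's what the naive version of that comparison looks like in code - a sketch with assumed column names ('lineup' as a set of five names, 'poss', 'margin_per100'), not an actual RAPM step. RAPM effectively does this for every 4-man core at once through the regression sketched earlier, with the opponent adjustment built in.

```python
import numpy as np
import pandas as pd

def core_on_off(stints: pd.DataFrame, core: set, player: str) -> float:
    """Possession-weighted margin for a fixed 4-man core with `player` as the fifth man,
    minus the same core's margin with anyone else as the fifth man."""
    has_core = stints[stints["lineup"].apply(lambda l: core <= l)]
    on = has_core[has_core["lineup"].apply(lambda l: player in l)]
    off = has_core[has_core["lineup"].apply(lambda l: player not in l)]
    wavg = lambda df: np.average(df["margin_per100"], weights=df["poss"])
    return wavg(on) - wavg(off)
```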
Perhaps one day the SportVU/Vantage-type data will be broadly available to the public and fanbase, and then we can break these things down individually and create far more accurate statistical profiles. That day is not here yet, however, and until then it's smart to be accepting of any and all possible tools. Just be mindful of each one's weaknesses rather than outright dismissing it.