For those of you talking about how stars don't matter, I would strongly encourage you to read the original article. No one is saying stars are the end-all, be-all, but actual math and science were used, and it was proven they do in fact matter.
A quick statement, just because you're hitting on an issue that relates to one of my pet peeves. You can ONLY truly prove something if you have PERFECT information or PERFECT data. Thus, one of the only areas I know where "proof" is the norm is the field of mathematics. Actual real, practicing mathematicians spend their entire careers continuing to "build" the language of mathematics by proving propositions about mathematical objects, mathematical syntax, or mathematical structures.
The standard form of such a proof: suppose you have two statements ... statement P and statement Q. A common scenario in math is that we contend that the two statements P and Q are equivalent. To prove such an equivalence, we'd have to verify BOTH directions ... that P implies Q AND that Q implies P.
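To make that structure concrete, here's a minimal sketch in Lean 4 (P and Q are placeholder propositions, nothing specific):

```lean
-- Proving an equivalence P ↔ Q requires supplying BOTH directions.
-- Given a proof that P implies Q and a proof that Q implies P,
-- Iff.intro combines them into a proof of the equivalence itself.
example (P Q : Prop) (hpq : P → Q) (hqp : Q → P) : P ↔ Q :=
  Iff.intro hpq hqp
```

Drop either hypothesis and the proof no longer checks ... which is exactly the point: one direction alone doesn't establish equivalence.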
Otherwise, if you ever have uncertainty or imperfect data/information ... then you're in the murky realm of solving what are called "inverse problems." It turns out that for any given data set, there are infinitely many different models that can "fit" that data. If you had access to perfect data ... you could ultimately add more and more constraints and more and more data until you were able to uniquely specify THE SINGLE model that "correctly" represents the data. Unfortunately, if you do not have perfect data, then you can never truly divorce yourself from the infinity of possible models that can describe your data.
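Here's a toy sketch of that non-uniqueness ... the data below are fabricated for illustration, nothing from the article:

```python
# Two very different models fitting the same imperfect data
# roughly equally well. All numbers here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)  # noisy "observations"

# Model A: a straight line. Model B: a degree-5 polynomial.
line = np.polyfit(x, y, deg=1)
poly = np.polyfit(x, y, deg=5)

# Comparable residuals on the data we have ...
rss_line = np.sum((np.polyval(line, x) - y) ** 2)
rss_poly = np.sum((np.polyval(poly, x) - y) ** 2)
print(f"line RSS: {rss_line:.4f}, degree-5 RSS: {rss_poly:.4f}")

# ... yet wildly different predictions the moment we extrapolate.
print("prediction at x=2 ... line:", np.polyval(line, 2.0),
      " poly:", np.polyval(poly, 2.0))
```

Both models are "consistent" with the observations, and nothing in the data alone tells you which one to trust.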
For the aforementioned reason ... my bullshit meter starts pinging at 11 whenever anybody claims that they've "proven" anything.
You see, even in science, you have to be ultra careful about claiming "proof." Scientists can tell you A LOT about what things CANNOT be true ... that is to say, they can DISPROVE things (this reflects Popper's perspective on demarcation in science). Furthermore, scientists can "puzzle-fit" ... and find models that are CONSISTENT with data (a view that Kuhn was very comfortable with). However, responsible scientists rarely claim that they've "proven" something tangible about the world around them. Of course, this isn't to say that scientists haven't uncovered "knowledge" about the world. Scientists have been able to generate specific descriptions of the world around them that no experiment has ever been able to disprove. And we're not just talking about a few experiments ... we're talking about thousands to millions of varying experiments that have been incapable of finding fault with the given descriptions.
I've read the 247Sports article about recruiting ... and there were many faults in its reasoning. I'm just one person ... finding fault with the article just by considering a few hypothetical scenarios. Suppose you take thousands of critical minds breaking down that same article ... they'll find many more faults. Thus, I'm sorry, but that article doesn't PROVE a darn thing.
As dbrocket rightly highlights ... statements saying that "stars matter" or "stars don't matter" are sensational exaggerations. Recruiting certainly matters. However, the PROBLEM is in the details: how do you QUANTIFY recruiting? How do you rigorously choose the correct metrics? How do you rigorously manage uncertainties? These are interesting questions that these sports-related websites fail to address.
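To show the kind of uncertainty handling I mean, here's a hedged sketch ... every number in it is fabricated, and the class-average-stars-vs-wins metric is a hypothetical choice on my part, not anything from the article:

```python
# Bootstrapping a correlation to put honest error bars on it.
# The star ratings and win totals below are made-up toy numbers.
import numpy as np

rng = np.random.default_rng(42)
stars = np.array([3.1, 3.4, 2.8, 3.9, 3.0, 3.6, 2.9, 3.7, 3.3, 3.5])
wins  = np.array([8,   7,   9,   10,  6,   8,   9,   9,   7,   10])

def correlation(s, w):
    return np.corrcoef(s, w)[0, 1]

# Resample the (stars, wins) pairs many times to see how much the
# estimated correlation wobbles with this little data.
n = stars.size
boot = []
for _ in range(10_000):
    idx = rng.integers(0, n, size=n)
    # Skip degenerate resamples where one variable is constant.
    if np.std(stars[idx]) > 0 and np.std(wins[idx]) > 0:
        boot.append(correlation(stars[idx], wins[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"point estimate: {correlation(stars, wins):.2f}")
print(f"95% bootstrap interval: [{lo:.2f}, {hi:.2f}]")
```

With a sample that small, the interval typically stretches from "barely any relationship" to "fairly strong relationship" ... which is precisely the kind of honest statement about uncertainty these articles never make.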
I have many friends who are information scientists ... and many of them tell me about how they generate models of things they have no understanding of from huge quantities of data. I ask them how they can trust the validation procedures for their models ... when they don't even understand what they're trying to describe. You see, the problem with most validation procedures is that they rely on circular logic. They essentially use their own models/algorithms to test themselves. If the results fall within an empirically "reasonable" set of bounds ... they "trust" the models/algorithms. The response my friends supply ... "we have to try something." The suggestion being that something is better than nothing. I agree with this sort of premise IF the algorithms are constantly being iterated upon ... improved and updated to account for new data and new understanding. However, many algorithms and models are simply used as unquestioned black boxes.
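A toy sketch of that circularity ... synthetic data again, with an over-flexible polynomial standing in for any black-box model:

```python
# Circular validation: scoring a model on the same data it was fit
# to makes it look far more trustworthy than it actually is.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200)   # weak true signal, lots of noise

fit_x, test_x = x[:40], x[40:]       # fit on 40 points ...
fit_y, test_y = y[:40], y[40:]       # ... hold out the other 160

# The stand-in "black box": a deliberately over-flexible polynomial.
coeffs = np.polyfit(fit_x, fit_y, deg=9)

def mse(px, py):
    """Mean squared error of the fitted model on (px, py)."""
    return float(np.mean((np.polyval(coeffs, px) - py) ** 2))

print("in-sample (circular) error:", mse(fit_x, fit_y))   # flattering
print("held-out error            :", mse(test_x, test_y)) # the truth
```

The circular number looks great; the held-out number is the one that actually speaks to trust ... and even that only helps if the model keeps getting re-checked as new data arrives.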