Statements like these annoy me (no offense to you in particular, StormHawk42) because they show a basic misunderstanding of the FPI (and other advanced metrics). And it's not your fault - it's how these metrics are presented to the public by talking heads and how they were used by the NCAA during the BCS era. The FPI, like most computer ratings, was not designed to rank teams based on what they've achieved. It's meant to be predictive of the future. Using these ratings to rank what teams have accomplished over a season is at odds with their intended purpose (and the fault for this definitely lies with the media, in my opinion).
The basic idea underlying most of these systems is that points scored and points allowed (adjusted for opponent strength, home field, etc.) are more predictive of future success than wins and losses. This has been shown to be true since at least the time of Bill James's early writings, and is primarily due to the randomness (or "luck") that is inherent in whether a team wins or loses, especially over a relatively small sample of 10-12 games or so. As a predictive measure, one could argue that the FPI was much better than the human polls at predicting what would happen in the Rose Bowl (and, to a certain extent, most of Iowa's season). That's not to say outliers and unexpected results don't happen - they certainly do! And no metric, whether it's devised by humans or computers, will ever come close to being 100% accurate.
Recruiting rankings were *not* the primary reason that Iowa was ranked so low in the FPI heading into the Rose Bowl - other metrics that don't use recruiting rankings (e.g. Sagarin, TeamRankings, etc.) also had Iowa in the mid-20s to low-30s. Recruiting rankings are such a minor component of the FPI at that stage of the season that their effect was pretty negligible. The real reason was a relatively weak strength of schedule and not winning some games by as much as we "should" have (e.g. Minnesota, Illinois, Pitt, etc.).
With all of this said, advanced metrics are far from perfect, and you can definitely take issue with what certain systems include or how they weight certain factors. They do have a pretty decent track record, though, and match up very closely with Vegas lines (which, I'd argue, are a pretty good predictor in and of themselves). Humans have a very hard time thinking probabilistically (e.g. having an 80% chance to win is not the same thing as a "sure thing"), and, as a result, have a hard time interpreting these metrics, in my opinion.
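To put a number on the "80% is not a sure thing" point (the 80% figure is just the hypothetical from the sentence above, not any team's actual rating), here's the quick arithmetic - an 80% favorite still loses one game in five, and stringing together several 80% games gets improbable fast:

```python
# Chance a team with an 80% win probability in EVERY game
# (a made-up, generously high number) wins out over n games.
p = 0.80
for n in (1, 3, 6, 12):
    print(f"win all {n:2d} games: {p**n:.1%}")
```

Even at a per-game number that high, winning out over a 12-game season happens less than 7% of the time (0.8^12 ≈ 6.9%) - which is why an upset or two per week isn't evidence that the metrics are "broken."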