Football season is upon us. On September 10th at 8:30 PM ET, New England and Pittsburgh will open the NFL regular season. This season I will continue to predict NFL outcomes, which you can find on this site every week. As we enter the 2015 football season, though, I thought it appropriate to review last season and then introduce some changes I've made.
Last season I created a model that took into account only three variables: which team had the better offense (1), which had the better defense (2), and home field advantage (3). To be clear, the model was intended as a simple experiment. It was used to predict each regular season game, compared against two other models, Nate Silver's Elo and Microsoft Bing's Cortana, and then went on to compete in the postseason. Let's take a look at how it fared in the regular season.
Nate Silver's Elo came out on top with a 176-80 (69%) record. The Bischoff model came in second of the three with a 172-84 (67%) record. Cortana came in third with a 169-87 (66%) record.
Breaking down the numbers further, the following graph shows how many correct predictions were made each week, by model.
Elo was the most consistent model; its lowest weekly percentage was 56%, in weeks 2 and 13. The Bischoff model was a little more volatile, with a low of 44% in weeks 2 and 16. Another notable week was week 5, where the Bischoff model hit 93%, the highest single-week percentage of any of the three models. Cortana had the lowest overall performance, bottoming out at 38% in week 4. It's also worth noting weeks 8-11, where all three models performed almost identically before falling back into volatility in the last stretch of the season.
In 2014, there was no clear indication of which model captured the season best, beyond Elo's four-game edge over the Bischoff model. I am excited to see what the 2015 season brings.
For the 2015 season, I'd like to introduce my new model: Amos. Amos takes into account more variables than the original, expanding to eight. Amos also attaches a percentage to each predicted win, which communicates how strong the model believes the prediction to be: the higher the percentage, the greater the predicted team's probability of winning.
As any sports analyst will tell you, underdogs always have a chance of winning. This is captured by the remainder (the complement) of the percentage. For example, if Amos predicts Green Bay has a 73% chance of winning its week 1 game against Chicago, then Chicago has a 27% chance of winning.
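The complement rule above can be sketched in a few lines of Python. This is only an illustration of the arithmetic, not Amos's actual code; the function name is made up for the example.

```python
def underdog_chance(favorite_chance):
    """Given the favorite's win probability, return the underdog's.

    The two probabilities are complements: they must sum to 1
    (ties are ignored for simplicity in this sketch).
    """
    if not 0.0 <= favorite_chance <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return 1.0 - favorite_chance

# The example from the text: Green Bay at 73% implies Chicago at 27%.
print(round(underdog_chance(0.73), 2))  # 0.27
```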
Additionally, I will compare Amos's predictions (and results) against other models where I can. As of now, I have not heard whether Elo or Cortana will be returning, but I will try to identify other contenders to gauge Amos's performance. What fun would this be without competition?
Finally, and probably the biggest addition for the 2015 season: each week, Amos will simulate the remainder of the regular season and predict every team's final record. Expect the first two weeks to be highly volatile, since the model and simulation start with little data, but the projections should improve as the season progresses. Week to week, we should also be able to see how Amos translates a team's performance into changes in its projected final record.
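A season simulation of this kind is commonly done with a Monte Carlo approach. The sketch below is only an illustration of that general technique, not Amos's actual implementation, and the record and per-game probabilities are hypothetical.

```python
import random

def simulate_rest_of_season(current_record, remaining_game_probs,
                            runs=10000, seed=42):
    """Monte Carlo sketch: estimate a team's expected final win total.

    current_record       -- (wins, losses) so far
    remaining_game_probs -- assumed win probability for each remaining game
    runs                 -- number of simulated seasons to average over
    Returns the average number of wins across all simulated seasons.
    """
    rng = random.Random(seed)
    total_wins = 0
    for _ in range(runs):
        wins = current_record[0]
        for p in remaining_game_probs:
            # Flip a weighted coin for each remaining game.
            if rng.random() < p:
                wins += 1
        total_wins += wins
    return total_wins / runs

# Hypothetical example: a 2-0 team with 14 games left.
probs = [0.73, 0.55, 0.40, 0.65, 0.50, 0.60, 0.35,
         0.70, 0.45, 0.55, 0.62, 0.48, 0.58, 0.52]
print(round(simulate_rest_of_season((2, 0), probs), 1))
```

Averaged over many runs, the result approaches the current wins plus the sum of the remaining win probabilities, which is why more early-season data quickly stabilizes the projected records.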
Week 1: 11-5
Week 2: 9-7
Week 3: 11-5
Week 4: 9-6
Week 5: 10-4
Week 6: 10-4
Week 7: 9-5
Week 8: 10-4
Week 9: 7-6
Week 11: 10-4
Week 12: 9-7
Week 13: 11-5
Week 14: 9-7
Week 15: 14-2
Week 16: 7-9
Week 17: 9-7