The law of large numbers and luck.
Try this experiment, call it small portion vs. big portion. Flip a coin 10 times and record the results; that is the small portion. Flip a coin 100 times and record the results; that is the big portion. If you get heads 8 out of 10 (80%) in the small portion, that result is not all that improbable.
In the big portion, if you got heads 80 out of 100 (80%), we would be checking the coin to see if it was rigged.
That is because the smaller the sample size, the greater the variation in the proportion of heads.
Ranking by the absolute number of heads favors the big portion, but ranking by the highest (or lowest) rate puts the smallest portions in the lead.
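A quick simulation makes this concrete. This is just an illustrative sketch of the small-portion-vs-big-portion experiment described above, using Python's built-in random module:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def heads_rate(n):
    """Proportion of heads in n fair coin flips."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

trials = 1000
small = [heads_rate(10) for _ in range(trials)]   # many small portions (10 flips)
big = [heads_rate(100) for _ in range(trials)]    # many big portions (100 flips)

# The spread of the heads proportion is much wider for the small portions,
# so the extreme highs and lows all come from the 10-flip samples.
print(f"n=10:  min={min(small):.2f} max={max(small):.2f}")
print(f"n=100: min={min(big):.2f} max={max(big):.2f}")
```

Run it a few times with different seeds and the pattern holds: the 10-flip samples regularly hit 80% heads or worse, while the 100-flip samples stay clustered near 50%.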
The point is not to put too much weight on a recent streak (recency bias).
An example of a small portion with misleading numbers:
In the 2019 NFL season, the highest completion percentages for QBs included:
Tim Boyle, 75.0%
Drew Brees, 74.3%
Derek Carr, 70.4%
Ryan Tannehill, 70.3%
If you asked yourself, "Who is Tim Boyle?" that is a great question. How could he qualify when he attempted only 4 passes in 2019?
If you asked, "Who is Matt Schaub?" I would tell you he must be a smart guy, since we graduated from the same high school. Even though Schaub actually started a game in 2019, he attempted only 67 passes, so he should not qualify either.
No one thinks these two QBs are more accurate than Drew Brees (not even my wife, and she doesn't watch sports). But the small portion size can mislead if not interpreted correctly.
The league average completion percentage in the NFL was 63.5%. (That number includes everyone who threw a pass: punters, running backs, and wide receivers as well as QBs.)
The more passes Tim Boyle and Matt Schaub attempted, the more the law of large numbers would drive their percentages down toward the league average.
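You can sketch that drift with a simulation. The 63.5% figure comes from the text; treating it as a passer's true completion rate is an assumption made purely for illustration, and the 3-of-4 hot start mirrors Boyle's line:

```python
import random

random.seed(7)
TRUE_RATE = 0.635  # league-average completion rate (from the text); assumed true skill

completions, attempts = 3, 4  # a hot start: 3 of 4, i.e. 75%
for checkpoint in (40, 400, 4000):
    while attempts < checkpoint:
        # each new attempt succeeds with the true rate, not the hot-start rate
        completions += random.random() < TRUE_RATE
        attempts += 1
    print(f"after {attempts} attempts: {completions / attempts:.1%}")
```

By a few thousand attempts, the early 75% has been swamped and the running percentage sits within a point or two of the assumed true rate.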
The point of all of this is that there will inevitably be streaks of wins and losses. It is a mathematical law.
Just because you flipped heads 8 out of 10 does not mean your next flip has an 80% chance of being heads; it is always 50%. Coins do not have memory. What happens instead is that the old percentage gets diluted with new data until the original streak is all but forgotten.
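The same idea as a sketch: start from the 8-heads-in-10 streak described above, keep flipping a fair coin, and watch the running average get diluted toward 50%. The flip counts here are arbitrary, chosen only to show the dilution:

```python
import random

random.seed(3)

heads, flips = 8, 10  # suppose the first 10 flips gave 8 heads (80%)
for _ in range(9990):  # then keep flipping a fair, memoryless coin
    heads += random.random() < 0.5
    flips += 1

# The hot start still counts, but 9,990 fresh flips drown it out.
print(f"running average after {flips} flips: {heads / flips:.1%}")
```

The first ten flips are never erased; they just become 0.1% of the data instead of 100% of it.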