Statistical variation in two Formula 1 qualifying formats

15

I just read this BBC article about the qualifying format in Formula 1.

The organisers want to make qualifying less predictable, i.e. increase the statistical variation in the results. Skipping over some irrelevant details, at the moment drivers are ranked by their best single lap from (to be concrete) two attempts.

One F1 chief, Jean Todt, suggested that ranking drivers by the average of two laps would increase the statistical variation, since drivers would be twice as likely to make a mistake. Other sources argued that any averaging would necessarily reduce the statistical variation.

Can we say who is right under reasonable assumptions? I suppose it boils down to the relative variance of mean(x, y) versus min(x, y), where x and y are random variables representing a driver's two lap times?
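For what it's worth, here is a minimal Monte Carlo sketch of the comparison I have in mind (Python/NumPy; the lap-time model is entirely made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # number of simulated pairs of qualifying laps

# Hypothetical lap-time model (purely illustrative): a 90 s baseline with
# small Gaussian noise, plus a 5% chance of a 3-second mistake on any lap.
def lap_times(size):
    base = 90 + 0.2 * rng.standard_normal(size)
    mistake = 3.0 * (rng.random(size) < 0.05)
    return base + mistake

x, y = lap_times(n), lap_times(n)
print("Var(mean(x, y)):", np.var((x + y) / 2))
print("Var(min(x, y)): ", np.var(np.minimum(x, y)))
```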

innisfree
source

Answers:

5

I think it depends on the distribution of the lap times.

Let X, Y be independent and identically distributed.

  1. If P(X=0) = P(X=1) = 1/2, then Var((X+Y)/2) = 1/8 < Var(min(X,Y)) = 3/16.
  2. However, if P(X=0) = 0.9 and P(X=100) = 0.1, then Var((X+Y)/2) = 450 > Var(min(X,Y)) = 99.

This is in line with the argument mentioned in the question about making mistakes (i.e. posting a very slow time with small probability). So we would need to know the distribution of lap times to decide.
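For the record, both cases can be verified with a short exact calculation (a Python sketch, not part of the original argument):

```python
import itertools

def var_mean_and_min(support, probs):
    """Exact Var((X+Y)/2) and Var(min(X,Y)) for i.i.d. X, Y on a finite support."""
    pairs = [((x + y) / 2, min(x, y), px * py)
             for (x, px), (y, py) in itertools.product(zip(support, probs), repeat=2)]
    def var(i):
        mu = sum(v[i] * v[2] for v in pairs)
        return sum((v[i] - mu) ** 2 * v[2] for v in pairs)
    return var(0), var(1)

print(var_mean_and_min([0, 1], [0.5, 0.5]))    # case 1: (0.125, 0.1875), i.e. 1/8 < 3/16
print(var_mean_and_min([0, 100], [0.9, 0.1]))  # case 2: (450.0, 99.0),  i.e. 450 > 99
```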

sandris
source
Interesting, I guess something like this also works for continuous rvs. What exactly goes wrong in the other proof?
innisfree
1
As far as I understand, it argues that, given y ≤ x, the distance between x and the mean is always less than the distance between x and min(x,y), thus the variance of the mean must be smaller than the variance of min(x,y). This, however, does not follow: min(x,y) can stay consistently far away while the mean varies a lot. If the proof were based on an actual calculation, it would be easier to pinpoint the exact spot where it goes wrong (or check that it is valid after all).
sandris
2

Without loss of generality, assume y ≤ x and that both variables are drawn from the same distribution with a particular mean and variance.

{y,x} improves on {x} by,

case 1, mean: (y − x)/2,

case 2, min: y − x.

Therefore, the mean has half the effect on the improvement (which is driven by the variance) compared with taking the minimum (for 2 trials). That is, the mean dampens the variability.

James
source
I'm not convinced this is quite correct, could you please provide a formal explanation?
sandris
2

Here is my proof that Var[Mean] <= Var[Min].

For 2 random variables x,y there is a relation between their mean and max and min.

2Mean(x,y)=Min(x,y)+Max(x,y)
Therefore
4Var[Mean]=Var[Min]+Var[Max]+2Cov[Min,Max]
If we now assume that the distribution is symmetric around the mean then
Var[Min(x,y)]=Var[Max(x,y)]
Then
4Var[Mean]=2Var[Min]+2Cov[Min,Max]
and
Cov[Min,Max]<=sqrt(Var[Min]Var[Max])=Var[Min]
Therefore
Var[Mean]<=Var[Min]
It is also easy to see from this derivation that in order to reverse this inequality you need a distribution with a very sharp truncation on the negative side of the mean. For example, for the exponential distribution the mean has a larger variance than the min.
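A quick simulation illustrates both regimes (a Python sketch; the two distributions are just convenient examples, not taken from the answer):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Symmetric case (standard normal) versus a sharply truncated case (exponential).
for name, sampler in [("normal", rng.standard_normal),
                      ("exponential", rng.standard_exponential)]:
    x, y = sampler(n), sampler(n)
    print(f"{name:12s} Var(mean) = {np.var((x + y) / 2):.3f}  "
          f"Var(min) = {np.var(np.minimum(x, y)):.3f}")
```

Analytically, for two i.i.d. standard normals Var(mean) = 1/2 while Var(min) = 1 − 1/π ≈ 0.68, so the inequality holds; for two i.i.d. Exp(1) variables the minimum is Exp(2), so Var(min) = 1/4 < 1/2 = Var(mean), reversing it as claimed.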
sega_sai
source
1

Nice question, thank you! I agree with @sandris that the distribution of lap times matters, but would like to emphasize that causal aspects of the question need to be addressed. My guess is that F1 wants to avoid a boring situation where the same team or driver dominates the sport year after year, and that they especially hope to introduce the (revenue-generating!) excitement of a real possibility that 'hot' new drivers can suddenly arise in the sport.

That is, my guess is that there is some hope to disrupt excessively stable rankings of teams/drivers. (Consider the analogy with raising the temperature in simulated annealing.) The question then becomes, what are the causal factors at work, and how are they distributed across the population of drivers/teams so as to create persistent advantage for current incumbents. (Consider the analogous question of levying high inheritance taxes to 'level the playing field' in society at large.)

Suppose incumbent teams are maintaining incumbency by a conservative strategy heavily dependent on driver experience, that emphasizes low variance in lap times at the expense of mean lap time. Suppose that by contrast the up-and-coming teams with (say) younger drivers, necessarily adopt a more aggressive (high-risk) strategy with larger variance, but that this involves some spectacular driving that sometimes 'hits it just right' and achieves a stunning lap time. Abstracting away from safety concerns, F1 would clearly like to see some such 'underdogs' in the race. In this causal scenario, it would seem that a best-of-n-laps policy (large n) would help give the upstarts a boost -- assuming that the experienced drivers are 'set in their ways', and so couldn't readily adapt their style to the new policy.

Suppose, on the other hand, that engine failure is an uncontrollable event with the same probability across all teams, and that the current rankings correctly reflect genuine gradation in driver/team quality across many other factors. In this case, the bad luck of an engine failure promises to be the lone 'leveling factor' that F1 could exploit to achieve greater equality of opportunity--at least without heavy-handed ranking manipulations that destroy the appearance of 'competition'. In this case, a policy that heavily penalizes engine failures (which are the only factor in this scenario not operating relatively in favor of the incumbents) promises to promote instability in rankings. In this case, the best-of-n policy mentioned above would be exactly the wrong policy to pursue.

David C. Norris
source
0

I generally agree with other answers that the average of two runs will have a lower variance, but I believe they are leaving out important aspects underlying the problem. A lot has to do with how drivers react to the rules and their strategies for qualifying.

For instance, with only one lap to qualify, drivers would be more conservative, and therefore more predictable and more boring to watch. The idea with two laps is to allow the drivers to take chances on one to try to get that "perfect lap", with another available for a conservative run. More runs would use up a lot of time, which could also be boring. The current setup might just be the "sweet spot" to get the most action in the shortest time frame.

Also note that with an averaging approach, the driver needs to find the fastest repeatable lap time. With the min approach, the driver needs to drive as fast as possible for only one lap, probably pushing further than they would under the averaging approach.

This discussion is closer to game theory. Your question might get better answers when framed in that light. Then one could propose other techniques, like the option for a driver to drop the first lap time in favor of a second run, and possibly a faster or slower time. Etc.

Also note that a change in qualifying was attempted this year that generally pushed drivers into one conservative lap. https://en.wikipedia.org/wiki/2016_Formula_One_season#Qualifying The result was viewed as a disaster and quickly cancelled.

Maddenker
source