Date: Tue, 4 Oct 2022 21:27:21 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: netdev@...r.kernel.org
Subject: netdev development stats for 6.1?

Hi!

For a while now I have been curious whether we can squeeze any
interesting stats out of the ML traffic. In particular I was curious
"who is helping" - who is reviewing the most patches (based on the
emails sent, not just review tags).

I quickly wrote a script to scan emails sent to netdev since 5.19 was
tagged (~14k) and count any message whose subject starts with '[' as a
patch and anything else as a comment/review. It's not very scientific,
but the result for the most part matches my expectations.

A disclaimer first - this methodology puts me ahead because I send a lot
of emails. Most of them are not reviews, so ignore me.

The second question to address upfront is whether publishing stats is
useful, or whether it mostly risks people treating participation as a
competition and trying to game the system. Hard to say, but if even a
single person can point to these stats to help justify more time spent
reviewing to their management - it's worth it. That said, feedback is
very welcome, public or private.

The stats are by number of threads and by number of messages.

Top 10 reviewers (thr):            Top 10 reviewers (msg):
  1. [320] Jakub Kicinski            1. [538] Jakub Kicinski
  2. [134] Andrew Lunn               2. [263] Andrew Lunn
  3. [ 51] Krzysztof Kozlowski       3. [122] Krzysztof Kozlowski
  4. [ 51] Paolo Abeni               4. [ 80] Rob Herring
  5. [ 47] Eric Dumazet              5. [ 78] Eric Dumazet
  6. [ 46] Rob Herring               6. [ 70] Paolo Abeni
  7. [ 35] Florian Fainelli          7. [ 65] Vladimir Oltean
  8. [ 35] Kalle Valo                8. [ 58] Ido Schimmel
  9. [ 32] David Ahern               9. [ 58] Michael S. Tsirkin
 10. [ 31] Vladimir Oltean          10. [ 57] Russell King

These seem to make sense, but the bias of the volume-centric view shows.
Note that the numbers are very close, so the exact order is of little
importance. The names should be familiar to everyone, I hope :)

Top 10 authors (thr):              Top 10 authors (msg):
  1. [ 84] Zhengchao Shao             1. [287] Zhengchao Shao
  2. [ 52] Vladimir Oltean            2. [232] Vladimir Oltean
  3. [ 43] Jakub Kicinski             3. [166] Saeed Mahameed
  4. [ 28] Tony Nguyen                4. [156] Kuniyuki Iwashima
  5. [ 28] cgel.zte@...il.com         5. [134] Sean Anderson
  6. [ 23] Stephen Rothwell           6. [122] Oleksij Rempel
  7. [ 23] Hangbin Liu                7. [106] Tony Nguyen
  8. [ 20] Wolfram Sang               8. [ 93] Mattias Forsblad
  9. [ 20] Kuniyuki Iwashima          9. [ 93] Jian Shen
 10. [ 20] Jiri Pirko                10. [ 86] Jakub Kicinski

Stephen is probably here by accident, since I was counting his merge
resolutions as patches. What is clear, tho (with the notable exception
of Vladimir), is that most of the authors are not making the top
reviewer list :(

And here is the part I was most curious about. Calculate a "score" which
is roughly 10 * reviews - 3 * authorship, to see who is a "good citizen":

Top 10 scores (positive):          Top 10 scores (negative):
  1. [4102] Jakub Kicinski           1. [397] Zhengchao Shao
  2. [1848] Andrew Lunn              2. [116] Kuniyuki Iwashima
  3. [ 737] Krzysztof Kozlowski      3. [105] cgel.zte@...il.com
  4. [ 620] Paolo Abeni              4. [ 93] Mattias Forsblad
  5. [ 611] Rob Herring              5. [ 82] Yang Yingliang
  6. [ 588] Eric Dumazet             6. [ 82] Sean Anderson
  7. [ 429] Florian Fainelli         7. [ 77] Daniel Lezcano
  8. [ 418] Kalle Valo               8. [ 68] Stephen Rothwell
  9. [ 406] David Ahern              9. [ 67] Arun Ramadoss
 10. [ 344] Russell King            10. [ 64] Wang Yufen

Now looking at companies. [Using my very rough mapping of people to
company based on email domain, and manual mapping for major contributors.]

Top 7 reviewers (thr):             Top 7 reviewers (msg):
  1. [369] Meta                      1. [640] Meta
  2. [139] Intel                     2. [306] RedHat
  3. [134] Andrew Lunn               3. [263] Andrew Lunn
  4. [127] RedHat                    4. [243] Intel
  5. [ 80] nVidia                    5. [193] nVidia
  6. [ 71] Google                    6. [134] Linaro
  7. [ 61] Linaro                    7. [121] Google

Top 8 authors (thr):               Top 7 authors (msg):
  1. [207] Huawei                    1. [640] Huawei
  2. [103] nVidia                    2. [496] nVidia
  3. [ 96] Intel                     3. [342] Intel
  4. [ 94] RedHat                    4. [332] RedHat
  5. [ 75] Google                    5. [263] NXP
  6. [ 60] Microchip                 6. [170] Linaro
  7. [ 59] NXP                       7. [157] Amazon
  8. [ 51] Meta

Top 12 scores (positive):          Top 12 scores (negative):
  1. [4763] Meta                     1. [887] Huawei
  2. [1848] Andrew Lunn              2. [145] Microchip
  3. [1432] RedHat                   3. [105] ZTE
  4. [1415] Intel                    4. [ 95] Amazon
  5. [ 680] Linaro                   5. [ 93] Mattias Forsblad
  6. [ 652] Google                   6. [ 68] Stephen Rothwell
  7. [ 627] nVidia                   7. [ 59] Wolfram Sang
  8. [ 609] Rob Herring              8. [ 57] wei.fang@....com
  9. [ 429] Florian Fainelli         9. [ 56] Arınç ÜNAL
 10. [ 418] Kalle Valo              10. [ 53] Sean Anderson
 11. [ 368] Russell King            11. [ 48] Maxime Chevallier
 12. [ 356] David Ahern             12. [ 46] Jianguo Zhang

The bot operators top the list of "bad citizens", as they do not
contribute to the review process. Microchip and Amazon also seem to send
a lot more code than they help to review.

Huge *thank you* to all the reviewers!
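
For readers curious what such a counting script could look like, below is
a minimal sketch of the per-message tally described above (the original
script was not posted, so this is only an approximation). It classifies
any message whose subject starts with '[' as a patch and everything else
as a review/comment, then applies the 10 * reviews - 3 * patches score.
The mbox filename and the use of Python's mailbox module are assumptions.

#!/usr/bin/env python3
# Minimal sketch (not the original script) of the counting described
# above: a message whose subject starts with '[' counts as a patch,
# anything else as a review/comment, and the citizenship score is
# 10 * reviews - 3 * patches.  The mbox filename and the use of the
# mailbox module are assumptions.
import mailbox
from collections import defaultdict
from email.header import decode_header, make_header
from email.utils import parseaddr

counts = defaultdict(lambda: {"patches": 0, "reviews": 0})

def text(value):
    # Decode RFC 2047 encoded headers ("=?utf-8?...?=") to plain text.
    return str(make_header(decode_header(value or "")))

for msg in mailbox.mbox("netdev-since-5.19.mbox"):  # hypothetical archive
    name, addr = parseaddr(text(msg.get("From")))
    subject = text(msg.get("Subject")).strip()
    who = name or addr
    # Replies ("Re: [PATCH ...]") don't start with '[', so they land in
    # the review/comment bucket, matching the heuristic above.
    if subject.startswith("["):
        counts[who]["patches"] += 1
    else:
        counts[who]["reviews"] += 1

def score(c):
    return 10 * c["reviews"] - 3 * c["patches"]

for who, c in sorted(counts.items(), key=lambda kv: score(kv[1]),
                     reverse=True)[:10]:
    print(f"{score(c):6} {who} ({c['reviews']} reviews, "
          f"{c['patches']} patches)")

Grouping by thread instead of by message would additionally need the
In-Reply-To/References headers to collapse each thread to one entry.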