Message-ID: <Yh5CwQgsdNefs2ZW@linutronix.de>
Date: Tue, 1 Mar 2022 16:58:57 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: "Jason A. Donenfeld" <Jason@...c4.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Linux Crypto Mailing List <linux-crypto@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Eric Biggers <ebiggers@...nel.org>,
Theodore Ts'o <tytso@....edu>,
Dominik Brodowski <linux@...inikbrodowski.net>
Subject: Re: RFC: Intervals to schedule the worker for mix_interrupt_randomness().
On 2022-02-28 19:58:05 [+0100], Jason A. Donenfeld wrote:
> Hi Sebastian,
Hi Jason,
> I'm actually trying quite hard not to change the details of entropy
> gathering for 5.18. There are lots of little arguments for why each
…
> random.c. So I'd like to minimize changes to the semantics. Right now,
> those semantics are:
>
> A) crng_init==0: pre_init_inject after 64 interrupts.
> B) crng_init!=0: mix_pool_bytes after 64 interrupts OR after 1 second
> has elapsed.
Yes. I double checked that it was not a recent change during that
rework. So yes, let's keep it as is; I just wanted to point that out.
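Just to make sure we are talking about the same thing, below is how I
read that decision. This is only a sketch with names simplified by me
(maybe_schedule_mix(), the crng_ready parameter), not the literal code
from drivers/char/random.c, and the MIX_INFLIGHT guard against
double-queueing the worker is left out:

#include <linux/jiffies.h>
#include <linux/smp.h>
#include <linux/workqueue.h>

struct fast_pool {
	struct work_struct mix;
	unsigned long last;	/* jiffies at the last mix */
	unsigned int count;	/* interrupts since the last mix */
};

/* Called from the interrupt path after mixing into the per-CPU pool. */
static void maybe_schedule_mix(struct fast_pool *pool, bool crng_ready)
{
	if (!crng_ready) {
		/* A) crng_init==0: inject only after 64 interrupts. */
		if (pool->count < 64)
			return;
	} else {
		/* B) crng_init!=0: mix after 64 interrupts OR 1 second. */
		if (pool->count < 64 &&
		    !time_is_before_jiffies(pool->last + HZ))
			return;
	}
	/* Defer the actual mixing to the per-CPU worker. */
	queue_work_on(raw_smp_processor_id(), system_highpri_wq,
		      &pool->mix);
}

The 1 second in branch B) is the interval I was asking about.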
…
> But all this brings me to what I'm really wondering when reading your
> email: do your observations matter? Are you observing a performance or
> reliability issue or something like that with those workqueues
> pending? Is this whole workqueue approach a mistake and we should
> revert it? Or is it still okay, but you were just idly wondering about
> that time limit? As you can tell, I'm mostly concerned with not
> breaking something by accident.
I noticed it because I backported the required patches (not everything
from your queue, just the patches I needed so I could drop mine and
have everything working). During testing I noticed that the worker was
scheduled more often than I expected, so I checked that it really was
being scheduled and had not accidentally stopped due to a backport gone
wrong. And since I got the condition wrong…
But you are asking about performance. I ran b.sh, which does:
- unpack a kernel to /dev/shm
- build allmodconfig
and then invoked it with "perf stat -r 5 -a --table ./b.sh" to get some
numbers. I applied your complete queue on top of v5.17-rc6, and the
result was:
| Performance counter stats for 'system wide' (5 runs):
|
| 45.502.822,32 msec cpu-clock # 32,014 CPUs utilized ( +- 0,05% )
| 9.479.371 context-switches # 208,419 /sec ( +- 0,08% )
| 839.380 cpu-migrations # 18,455 /sec ( +- 0,38% )
| 624.839.341 page-faults # 13,738 K/sec ( +- 0,00% )
|105.297.794.633.131 cycles # 2,315 GHz ( +- 0,01% )
|77.238.191.940.405 stalled-cycles-frontend # 73,37% frontend cycles idle ( +- 0,01% )
|56.724.314.805.475 stalled-cycles-backend # 53,89% backend cycles idle ( +- 0,02% )
|69.889.082.499.264 instructions # 0,66 insn per cycle
| # 1,10 stalled cycles per insn ( +- 0,00% )
|14.670.304.314.265 branches # 322,550 M/sec ( +- 0,00% )
| 561.326.606.978 branch-misses # 3,83% of all branches ( +- 0,02% )
|
| # Table of individual measurements:
| 1419,113 (-2,247) #
| 1422,552 (+1,192) #
| 1420,773 (-0,587) #
| 1422,362 (+1,002) #
| 1422,001 (+0,641) #
|
| # Final result:
| 1421,360 +- 0,641 seconds time elapsed ( +- 0,05% )
I also checked a few commits earlier, at "random: rewrite header
introductory comment", which is before the workqueue rework started:
| Performance counter stats for 'system wide' (5 runs):
|
| 45.508.013,44 msec cpu-clock # 32,034 CPUs utilized ( +- 0,05% )
| 9.456.280 context-switches # 208,017 /sec ( +- 0,11% )
| 837.148 cpu-migrations # 18,415 /sec ( +- 0,30% )
| 624.851.749 page-faults # 13,745 K/sec ( +- 0,00% )
|105.289.268.852.002 cycles # 2,316 GHz ( +- 0,01% )
|77.233.457.186.415 stalled-cycles-frontend # 73,38% frontend cycles idle ( +- 0,02% )
|56.740.014.447.074 stalled-cycles-backend # 53,91% backend cycles idle ( +- 0,02% )
|69.882.802.096.982 instructions # 0,66 insn per cycle
| # 1,10 stalled cycles per insn ( +- 0,00% )
|14.670.395.601.080 branches # 322,716 M/sec ( +- 0,00% )
| 560.846.203.691 branch-misses # 3,82% of all branches ( +- 0,01% )
|
| # Table of individual measurements:
| 1418,288 (-2,347) #
| 1420,425 (-0,210) #
| 1420,633 (-0,001) #
| 1421,665 (+1,030) #
| 1422,162 (+1,528) #
|
| # Final result:
| 1420,635 +- 0,669 seconds time elapsed ( +- 0,05% )
and then on v5.17-rc6:
| Performance counter stats for 'system wide' (5 runs):
|
| 45.459.406,05 msec cpu-clock # 32,009 CPUs utilized ( +- 0,04% )
| 9.460.380 context-switches # 208,171 /sec ( +- 0,10% )
| 837.571 cpu-migrations # 18,430 /sec ( +- 0,30% )
| 624.859.326 page-faults # 13,750 K/sec ( +- 0,00% )
|105.247.262.852.106 cycles # 2,316 GHz ( +- 0,01% )
|77.185.603.119.285 stalled-cycles-frontend # 73,34% frontend cycles idle ( +- 0,01% )
|56.688.996.383.094 stalled-cycles-backend # 53,87% backend cycles idle ( +- 0,02% )
|69.883.077.705.602 instructions # 0,66 insn per cycle
| # 1,10 stalled cycles per insn ( +- 0,00% )
|14.670.347.661.094 branches # 322,813 M/sec ( +- 0,00% )
| 561.066.414.554 branch-misses # 3,82% of all branches ( +- 0,01% )
|
| # Table of individual measurements:
| 1418,142 (-2,061) #
| 1420,187 (-0,016) #
| 1421,242 (+1,039) #
| 1420,800 (+0,597) #
| 1420,644 (+0,441) #
|
| # Final result:
| 1420,203 +- 0,542 seconds time elapsed ( +- 0,04% )
It does not appear that anything stands out.
> Regards,
> Jason
Sebastian