Message-ID: <2710105.QN8ttzUxoi@positron.chronox.de>
Date: Wed, 27 Apr 2016 19:47:33 +0200
From: Stephan Mueller <smueller@...onox.de>
To: Andi Kleen <andi@...stfloor.org>
Cc: Sandy Harris <sandyinchina@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
linux-crypto@...r.kernel.org, Theodore Ts'o <tytso@....edu>,
Jason Cooper <jason@...edaemon.net>,
John Denker <jsd@...n.com>, "H. Peter Anvin" <hpa@...or.com>
Subject: Re: random(4) changes
On Monday, 25 April 2016, 12:35:32, Andi Kleen wrote:
Hi Andi,
> > > > If it is the latter, can you explain where the scalability issue comes
> > > > in?
> > >
> > > A single pool which is locked/written to does not scale. Larger systems
> > > need multiple pools
> >
> > That would imply that even when you have a system with 1000 CPUs, you want
> > to have a large amount of random numbers. Is this the use case?
>
> That is right. Large systems do more work than small systems.
> If the system is for example handling SSL connections it needs
> more random numbers to handle more connections.
I have ported the NUMA logic to the LRNG. Just like your patch, it
instantiates one secondary DRBG per NUMA node.
However, the initialization of the secondary DRBG instances differs: I
serialize it so that only one DRBG instance at a time is seeded from the
primary DRBG.
To exercise the allocation path, I first tested the code with per-CPU
instances instead of per-NUMA-node instances; that test showed everything
works fine.
I then switched to one instance per NUMA node. This works on my test
systems, but as they have only one node, they instantiate only a single
DRBG.
May I ask you to test that code on your system, as I do not have access to
a NUMA system? I will release a new version shortly.
Ciao
Stephan